00:00:00.023 Started by upstream project "autotest-per-patch" build number 127120
00:00:00.023 originally caused by:
00:00:00.024 Started by user sys_sgci
00:00:00.109 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.110 The recommended git tool is: git
00:00:00.110 using credential 00000000-0000-0000-0000-000000000002
00:00:00.112 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.156 Fetching changes from the remote Git repository
00:00:00.157 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.196 Using shallow fetch with depth 1
00:00:00.196 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.196 > git --version # timeout=10
00:00:00.237 > git --version # 'git version 2.39.2'
00:00:00.237 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.256 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.256 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.542 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.552 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.562 Checking out Revision f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 (FETCH_HEAD)
00:00:06.562 > git config core.sparsecheckout # timeout=10
00:00:06.575 > git read-tree -mu HEAD # timeout=10
00:00:06.592 > git checkout -f f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=5
00:00:06.624 Commit message: "spdk-abi-per-patch: fix check-so-deps-docker-autotest parameters"
00:00:06.625 > git rev-list --no-walk f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=10
00:00:06.711 [Pipeline] Start of Pipeline
00:00:06.724 [Pipeline] library
00:00:06.725 Loading library shm_lib@master
00:00:06.725 Library shm_lib@master is cached. Copying from home.
00:00:06.739 [Pipeline] node
00:00:06.747 Running on WFP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:06.750 [Pipeline] {
00:00:06.758 [Pipeline] catchError
00:00:06.759 [Pipeline] {
00:00:06.768 [Pipeline] wrap
00:00:06.774 [Pipeline] {
00:00:06.780 [Pipeline] stage
00:00:06.781 [Pipeline] { (Prologue)
00:00:06.982 [Pipeline] sh
00:00:07.268 + logger -p user.info -t JENKINS-CI
00:00:07.286 [Pipeline] echo
00:00:07.287 Node: WFP8
00:00:07.292 [Pipeline] sh
00:00:07.591 [Pipeline] setCustomBuildProperty
00:00:07.600 [Pipeline] echo
00:00:07.601 Cleanup processes
00:00:07.605 [Pipeline] sh
00:00:07.885 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.885 609910 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.896 [Pipeline] sh
00:00:08.181 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.181 ++ grep -v 'sudo pgrep'
00:00:08.181 ++ awk '{print $1}'
00:00:08.181 + sudo kill -9
00:00:08.181 + true
00:00:08.194 [Pipeline] cleanWs
00:00:08.203 [WS-CLEANUP] Deleting project workspace...
00:00:08.203 [WS-CLEANUP] Deferred wipeout is used...
00:00:08.209 [WS-CLEANUP] done
00:00:08.211 [Pipeline] setCustomBuildProperty
00:00:08.221 [Pipeline] sh
00:00:08.501 + sudo git config --global --replace-all safe.directory '*'
00:00:08.561 [Pipeline] httpRequest
00:00:08.660 [Pipeline] echo
00:00:08.661 Sorcerer 10.211.164.101 is alive
00:00:08.668 [Pipeline] httpRequest
00:00:08.671 HttpMethod: GET
00:00:08.672 URL: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:08.672 Sending request to url: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:08.695 Response Code: HTTP/1.1 200 OK
00:00:08.696 Success: Status code 200 is in the accepted range: 200,404
00:00:08.696 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:27.807 [Pipeline] sh
00:00:28.091 + tar --no-same-owner -xf jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:28.106 [Pipeline] httpRequest
00:00:28.144 [Pipeline] echo
00:00:28.146 Sorcerer 10.211.164.101 is alive
00:00:28.154 [Pipeline] httpRequest
00:00:28.159 HttpMethod: GET
00:00:28.159 URL: http://10.211.164.101/packages/spdk_3c25cfe1d27e578d46d5823ea704025d22b41d86.tar.gz
00:00:28.159 Sending request to url: http://10.211.164.101/packages/spdk_3c25cfe1d27e578d46d5823ea704025d22b41d86.tar.gz
00:00:28.166 Response Code: HTTP/1.1 200 OK
00:00:28.166 Success: Status code 200 is in the accepted range: 200,404
00:00:28.167 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_3c25cfe1d27e578d46d5823ea704025d22b41d86.tar.gz
00:02:45.320 [Pipeline] sh
00:02:45.614 + tar --no-same-owner -xf spdk_3c25cfe1d27e578d46d5823ea704025d22b41d86.tar.gz
00:02:48.174 [Pipeline] sh
00:02:48.462 + git -C spdk log --oneline -n5
00:02:48.462 3c25cfe1d raid: Generic changes to support DIF/DIX for RAID
00:02:48.462 0e983c564 nvmf/tcp: use sock group polling for the listening sockets
00:02:48.462 cff943742 nvmf/tcp: add transport field to the spdk_nvmf_tcp_port struct
00:02:48.462 13fe888c9 nvmf: add helper function to get a transport poll group
00:02:48.462 02f272e46 test/dma: Fix ibv_reg_mr usage
00:02:48.475 [Pipeline] }
00:02:48.492 [Pipeline] // stage
00:02:48.501 [Pipeline] stage
00:02:48.503 [Pipeline] { (Prepare)
00:02:48.520 [Pipeline] writeFile
00:02:48.537 [Pipeline] sh
00:02:48.827 + logger -p user.info -t JENKINS-CI
00:02:48.842 [Pipeline] sh
00:02:49.132 + logger -p user.info -t JENKINS-CI
00:02:49.145 [Pipeline] sh
00:02:49.434 + cat autorun-spdk.conf
00:02:49.434 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:49.434 SPDK_TEST_NVMF=1
00:02:49.434 SPDK_TEST_NVME_CLI=1
00:02:49.434 SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:49.434 SPDK_TEST_NVMF_NICS=e810
00:02:49.434 SPDK_TEST_VFIOUSER=1
00:02:49.434 SPDK_RUN_UBSAN=1
00:02:49.434 NET_TYPE=phy
00:02:49.442 RUN_NIGHTLY=0
00:02:49.447 [Pipeline] readFile
00:02:49.474 [Pipeline] withEnv
00:02:49.476 [Pipeline] {
00:02:49.490 [Pipeline] sh
00:02:49.778 + set -ex
00:02:49.778 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:02:49.778 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:49.778 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:49.778 ++ SPDK_TEST_NVMF=1
00:02:49.778 ++ SPDK_TEST_NVME_CLI=1
00:02:49.778 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:49.778 ++ SPDK_TEST_NVMF_NICS=e810
00:02:49.778 ++ SPDK_TEST_VFIOUSER=1
00:02:49.778 ++ SPDK_RUN_UBSAN=1
00:02:49.778 ++ NET_TYPE=phy
00:02:49.778 ++ RUN_NIGHTLY=0
00:02:49.778 + case $SPDK_TEST_NVMF_NICS in
00:02:49.778 + DRIVERS=ice
00:02:49.778 + [[ tcp == \r\d\m\a ]]
00:02:49.778 + [[ -n ice ]]
00:02:49.778 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:02:49.778 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:02:49.778 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:02:49.778 rmmod: ERROR: Module irdma is not currently loaded
00:02:49.778 rmmod: ERROR: Module i40iw is not currently loaded
00:02:49.778 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:02:49.778 + true
00:02:49.778 + for D in $DRIVERS
00:02:49.778 + sudo modprobe ice
00:02:49.778 + exit 0
00:02:49.788 [Pipeline] }
00:02:49.805 [Pipeline] // withEnv
00:02:49.810 [Pipeline] }
00:02:49.826 [Pipeline] // stage
00:02:49.835 [Pipeline] catchError
00:02:49.837 [Pipeline] {
00:02:49.852 [Pipeline] timeout
00:02:49.852 Timeout set to expire in 50 min
00:02:49.854 [Pipeline] {
00:02:49.869 [Pipeline] stage
00:02:49.871 [Pipeline] { (Tests)
00:02:49.886 [Pipeline] sh
00:02:50.178 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:50.178 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:50.178 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:50.178 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:02:50.178 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:50.178 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:02:50.178 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:02:50.178 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:02:50.178 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:02:50.178 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:02:50.178 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:02:50.178 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:50.178 + source /etc/os-release
00:02:50.178 ++ NAME='Fedora Linux'
00:02:50.178 ++ VERSION='38 (Cloud Edition)'
00:02:50.178 ++ ID=fedora
00:02:50.178 ++ VERSION_ID=38
00:02:50.178 ++ VERSION_CODENAME=
00:02:50.178 ++ PLATFORM_ID=platform:f38
00:02:50.178 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:02:50.178 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:50.178 ++ LOGO=fedora-logo-icon
00:02:50.178 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:02:50.178 ++ HOME_URL=https://fedoraproject.org/
00:02:50.178 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:02:50.178 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:50.178 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:50.178 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:50.178 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:02:50.178 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:50.178 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:02:50.178 ++ SUPPORT_END=2024-05-14
00:02:50.178 ++ VARIANT='Cloud Edition'
00:02:50.178 ++ VARIANT_ID=cloud
00:02:50.178 + uname -a
00:02:50.178 Linux spdk-wfp-08 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:02:50.178 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:02:52.731 Hugepages
00:02:52.731 node hugesize free / total
00:02:52.731 node0 1048576kB 0 / 0
00:02:52.731 node0 2048kB 0 / 0
00:02:52.731 node1 1048576kB 0 / 0
00:02:52.731 node1 2048kB 0 / 0
00:02:52.731
00:02:52.731 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:52.731 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:02:52.731 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:02:52.731 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:02:52.731 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:02:52.731 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:02:52.731 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:02:52.731 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:02:52.731 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:02:52.731 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:02:52.731 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:02:52.731 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:02:52.731 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:02:52.731 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:02:52.731 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:02:52.731 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:02:52.731 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:02:52.731 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:02:52.731 + rm -f /tmp/spdk-ld-path
00:02:52.731 + source autorun-spdk.conf
00:02:52.731 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:52.731 ++ SPDK_TEST_NVMF=1
00:02:52.731 ++ SPDK_TEST_NVME_CLI=1
00:02:52.731 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:52.731 ++ SPDK_TEST_NVMF_NICS=e810
00:02:52.731 ++ SPDK_TEST_VFIOUSER=1
00:02:52.731 ++ SPDK_RUN_UBSAN=1
00:02:52.731 ++ NET_TYPE=phy
00:02:52.731 ++ RUN_NIGHTLY=0
00:02:52.731 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:52.731 + [[ -n '' ]]
00:02:52.731 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:52.731 + for M in /var/spdk/build-*-manifest.txt
00:02:52.731 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:52.731 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:52.731 + for M in /var/spdk/build-*-manifest.txt
00:02:52.731 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:52.731 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:52.731 ++ uname
00:02:52.731 + [[ Linux == \L\i\n\u\x ]]
00:02:52.731 + sudo dmesg -T
00:02:52.731 + sudo dmesg --clear
00:02:52.731 + dmesg_pid=611366
00:02:52.731 + [[ Fedora Linux == FreeBSD ]]
00:02:52.731 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:52.731 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:52.731 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:52.731 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:02:52.731 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:02:52.731 + [[ -x /usr/src/fio-static/fio ]]
00:02:52.731 + export FIO_BIN=/usr/src/fio-static/fio
00:02:52.731 + FIO_BIN=/usr/src/fio-static/fio
00:02:52.731 + sudo dmesg -Tw
00:02:52.731 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:52.731 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:52.731 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:52.731 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:52.731 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:52.731 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:52.731 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:52.731 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:52.731 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:52.731 Test configuration:
00:02:52.731 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:52.731 SPDK_TEST_NVMF=1
00:02:52.731 SPDK_TEST_NVME_CLI=1
00:02:52.731 SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:52.731 SPDK_TEST_NVMF_NICS=e810
00:02:52.731 SPDK_TEST_VFIOUSER=1
00:02:52.731 SPDK_RUN_UBSAN=1
00:02:52.731 NET_TYPE=phy
00:02:52.731 RUN_NIGHTLY=0
01:03:15 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
01:03:15 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
01:03:15 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
01:03:15 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
01:03:15 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
01:03:15 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
01:03:15 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
01:03:15 -- paths/export.sh@5 -- $ export PATH
01:03:15 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
01:03:15 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
01:03:15 -- common/autobuild_common.sh@444 -- $ date +%s
01:03:15 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721862195.XXXXXX
01:03:15 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721862195.zV2sEg
01:03:15 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
01:03:15 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']'
01:03:15 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
01:03:15 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
01:03:15 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
01:03:15 -- common/autobuild_common.sh@460 -- $ get_config_params
01:03:15 -- common/autotest_common.sh@396 -- $ xtrace_disable
01:03:15 -- common/autotest_common.sh@10 -- $ set +x
01:03:15 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
01:03:15 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
01:03:15 -- pm/common@17 -- $ local monitor
01:03:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
01:03:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
01:03:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
01:03:15 -- pm/common@21 -- $ date +%s
01:03:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
01:03:15 -- pm/common@21 -- $ date +%s
01:03:15 -- pm/common@25 -- $ sleep 1
01:03:15 -- pm/common@21 -- $ date +%s
01:03:15 -- pm/common@21 -- $ date +%s
01:03:15 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721862195
01:03:15 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721862195
01:03:15 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721862195
01:03:15 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721862195
00:02:52.993 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721862195_collect-vmstat.pm.log
00:02:52.993 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721862195_collect-cpu-load.pm.log
00:02:52.993 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721862195_collect-cpu-temp.pm.log
00:02:52.993 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721862195_collect-bmc-pm.bmc.pm.log
00:02:53.935 01:03:16 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:02:53.935 01:03:16 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:53.935 01:03:16 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:53.935 01:03:16 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:53.935 01:03:16 -- spdk/autobuild.sh@16 -- $ date -u
00:02:53.935 Wed Jul 24 11:03:16 PM UTC 2024
00:02:53.935 01:03:16 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:53.935 v24.09-pre-224-g3c25cfe1d
00:02:53.935 01:03:16 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:02:53.935 01:03:16 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:53.935 01:03:16 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:53.935 01:03:16 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
00:02:53.935 01:03:16 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:02:53.935 01:03:16 -- common/autotest_common.sh@10 -- $ set +x
00:02:53.935 ************************************
00:02:53.935 START TEST ubsan
00:02:53.935 ************************************
00:02:53.935 01:03:16 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan'
00:02:53.935 using ubsan
00:02:53.935
00:02:53.935 real 0m0.000s
00:02:53.935 user 0m0.000s
00:02:53.935 sys 0m0.000s
00:02:53.935 01:03:16 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable
00:02:53.935 01:03:16 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:53.935 ************************************
00:02:53.935 END TEST ubsan
00:02:53.935 ************************************
00:02:53.935 01:03:16 -- common/autotest_common.sh@1142 -- $ return 0
00:02:53.935 01:03:16 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:02:53.935 01:03:16 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:53.935 01:03:16 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:53.935 01:03:16 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:53.935 01:03:16 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:53.935 01:03:16 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:53.935 01:03:16 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:53.935 01:03:16 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:53.935 01:03:16 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:02:54.195 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:02:54.195 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:02:54.455 Using 'verbs' RDMA provider
00:03:07.254 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:03:19.474 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:03:19.474 Creating mk/config.mk...done.
00:03:19.474 Creating mk/cc.flags.mk...done.
00:03:19.474 Type 'make' to build.
00:03:19.474 01:03:40 -- spdk/autobuild.sh@69 -- $ run_test make make -j96
00:03:19.474 01:03:40 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
00:03:19.474 01:03:40 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:03:19.474 01:03:40 -- common/autotest_common.sh@10 -- $ set +x
00:03:19.474 ************************************
00:03:19.474 START TEST make
00:03:19.474 ************************************
00:03:19.474 01:03:40 make -- common/autotest_common.sh@1123 -- $ make -j96
00:03:19.474 make[1]: Nothing to be done for 'all'.
00:03:19.739 The Meson build system
00:03:19.739 Version: 1.3.1
00:03:19.739 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:03:19.739 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:03:19.739 Build type: native build
00:03:19.739 Project name: libvfio-user
00:03:19.739 Project version: 0.0.1
00:03:19.739 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:03:19.739 C linker for the host machine: cc ld.bfd 2.39-16
00:03:19.739 Host machine cpu family: x86_64
00:03:19.739 Host machine cpu: x86_64
00:03:19.739 Run-time dependency threads found: YES
00:03:19.739 Library dl found: YES
00:03:19.739 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:03:19.739 Run-time dependency json-c found: YES 0.17
00:03:19.739 Run-time dependency cmocka found: YES 1.1.7
00:03:19.739 Program pytest-3 found: NO
00:03:19.739 Program flake8 found: NO
00:03:19.739 Program misspell-fixer found: NO
00:03:19.739 Program restructuredtext-lint found: NO
00:03:19.739 Program valgrind found: YES (/usr/bin/valgrind)
00:03:19.739 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:19.739 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:19.739 Compiler for C supports arguments -Wwrite-strings: YES
00:03:19.739 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:03:19.739 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:03:19.739 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:03:19.739 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:03:19.739 Build targets in project: 8
00:03:19.739 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:03:19.739 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:03:19.739
00:03:19.739 libvfio-user 0.0.1
00:03:19.739
00:03:19.739 User defined options
00:03:19.739 buildtype : debug
00:03:19.739 default_library: shared
00:03:19.739 libdir : /usr/local/lib
00:03:19.739
00:03:19.739 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:20.306 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:03:20.306 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:03:20.306 [2/37] Compiling C object samples/lspci.p/lspci.c.o
00:03:20.306 [3/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:03:20.306 [4/37] Compiling C object samples/null.p/null.c.o
00:03:20.306 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:03:20.306 [6/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:03:20.306 [7/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:03:20.306 [8/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:03:20.306 [9/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:03:20.306 [10/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:03:20.306 [11/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:03:20.306 [12/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:03:20.306 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:03:20.306 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:03:20.306 [15/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:03:20.306 [16/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:03:20.306 [17/37] Compiling C object test/unit_tests.p/mocks.c.o
00:03:20.306 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:03:20.306 [19/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:03:20.306 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:03:20.306 [21/37] Compiling C object samples/server.p/server.c.o
00:03:20.306 [22/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:03:20.306 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:03:20.306 [24/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:03:20.564 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:03:20.564 [26/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:03:20.564 [27/37] Compiling C object samples/client.p/client.c.o
00:03:20.564 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:03:20.564 [29/37] Linking target samples/client
00:03:20.564 [30/37] Linking target test/unit_tests
00:03:20.564 [31/37] Linking target lib/libvfio-user.so.0.0.1
00:03:20.564 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:03:20.564 [33/37] Linking target samples/null
00:03:20.564 [34/37] Linking target samples/gpio-pci-idio-16
00:03:20.564 [35/37] Linking target samples/lspci
00:03:20.564 [36/37] Linking target samples/shadow_ioeventfd_server
00:03:20.564 [37/37] Linking target samples/server
00:03:20.564 INFO: autodetecting backend as ninja
00:03:20.564 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:03:20.823 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:03:21.080 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:03:21.080 ninja: no work to do.
00:03:26.354 The Meson build system
00:03:26.354 Version: 1.3.1
00:03:26.354 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:03:26.354 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:03:26.354 Build type: native build
00:03:26.354 Program cat found: YES (/usr/bin/cat)
00:03:26.354 Project name: DPDK
00:03:26.354 Project version: 24.03.0
00:03:26.354 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:03:26.354 C linker for the host machine: cc ld.bfd 2.39-16
00:03:26.354 Host machine cpu family: x86_64
00:03:26.354 Host machine cpu: x86_64
00:03:26.354 Message: ## Building in Developer Mode ##
00:03:26.354 Program pkg-config found: YES (/usr/bin/pkg-config)
00:03:26.354 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:03:26.354 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:03:26.354 Program python3 found: YES (/usr/bin/python3)
00:03:26.354 Program cat found: YES (/usr/bin/cat)
00:03:26.354 Compiler for C supports arguments -march=native: YES
00:03:26.354 Checking for size of "void *" : 8
00:03:26.354 Checking for size of "void *" : 8 (cached)
00:03:26.354 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:03:26.354 Library m found: YES
00:03:26.354 Library numa found: YES
00:03:26.354 Has header "numaif.h" : YES
00:03:26.354 Library fdt found: NO
00:03:26.354 Library execinfo found: NO
00:03:26.354 Has header "execinfo.h" : YES
00:03:26.354 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:03:26.354 Run-time dependency libarchive found: NO (tried pkgconfig)
00:03:26.354 Run-time dependency libbsd found: NO (tried pkgconfig)
00:03:26.354 Run-time dependency jansson found: NO (tried pkgconfig)
00:03:26.354 Run-time dependency openssl found: YES 3.0.9
00:03:26.354 Run-time dependency libpcap found: YES 1.10.4
00:03:26.354 Has header "pcap.h" with dependency libpcap: YES
00:03:26.354 Compiler for C supports arguments -Wcast-qual: YES
00:03:26.354 Compiler for C supports arguments -Wdeprecated: YES
00:03:26.354 Compiler for C supports arguments -Wformat: YES
00:03:26.354 Compiler for C supports arguments -Wformat-nonliteral: NO
00:03:26.354 Compiler for C supports arguments -Wformat-security: NO
00:03:26.354 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:26.354 Compiler for C supports arguments -Wmissing-prototypes: YES
00:03:26.354 Compiler for C supports arguments -Wnested-externs: YES
00:03:26.354 Compiler for C supports arguments -Wold-style-definition: YES
00:03:26.354 Compiler for C supports arguments -Wpointer-arith: YES
00:03:26.354 Compiler for C supports arguments -Wsign-compare: YES
00:03:26.354 Compiler for C supports arguments -Wstrict-prototypes: YES
00:03:26.354 Compiler for C supports arguments -Wundef: YES
00:03:26.354 Compiler for C supports arguments -Wwrite-strings: YES
00:03:26.354 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:03:26.354 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:03:26.354 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:26.354 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:03:26.354 Program objdump found: YES (/usr/bin/objdump)
00:03:26.354 Compiler for C supports arguments -mavx512f: YES
00:03:26.354 Checking if "AVX512 checking" compiles: YES
00:03:26.354 Fetching value of define "__SSE4_2__" : 1
00:03:26.354 Fetching value of define "__AES__" : 1
00:03:26.354 Fetching value of define "__AVX__" : 1
00:03:26.354 Fetching value of define "__AVX2__" : 1
00:03:26.354 Fetching value of define "__AVX512BW__" : 1
00:03:26.354 Fetching value of define "__AVX512CD__" : 1
00:03:26.354 Fetching value of define "__AVX512DQ__" : 1
00:03:26.354 Fetching value of define "__AVX512F__" : 1
00:03:26.354 Fetching value of define "__AVX512VL__" : 1
00:03:26.354 Fetching value of define "__PCLMUL__" : 1
00:03:26.354 Fetching value of define "__RDRND__" : 1
00:03:26.354 Fetching value of define "__RDSEED__" : 1
00:03:26.354 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:03:26.354 Fetching value of define "__znver1__" : (undefined)
00:03:26.354 Fetching value of define "__znver2__" : (undefined)
00:03:26.354 Fetching value of define "__znver3__" : (undefined)
00:03:26.354 Fetching value of define "__znver4__" : (undefined)
00:03:26.354 Compiler for C supports arguments -Wno-format-truncation: YES
00:03:26.354 Message: lib/log: Defining dependency "log"
00:03:26.354 Message: lib/kvargs: Defining dependency "kvargs"
00:03:26.354 Message: lib/telemetry: Defining dependency "telemetry"
00:03:26.354 Checking for function "getentropy" : NO
00:03:26.354 Message: lib/eal: Defining dependency "eal"
00:03:26.354 Message: lib/ring: Defining dependency "ring"
00:03:26.354 Message: lib/rcu: Defining dependency "rcu"
00:03:26.354 Message: lib/mempool: Defining dependency "mempool"
00:03:26.354 Message: lib/mbuf: Defining dependency "mbuf"
00:03:26.355 Fetching value of define "__PCLMUL__" : 1 (cached)
00:03:26.355 Fetching value of define "__AVX512F__" : 1 (cached)
00:03:26.355 Fetching value of define "__AVX512BW__" : 1 (cached)
00:03:26.355 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:03:26.355 Fetching value of define "__AVX512VL__" : 1 (cached)
00:03:26.355 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:03:26.355 Compiler for C supports arguments -mpclmul: YES
00:03:26.355 Compiler for C supports arguments -maes: YES
00:03:26.355 Compiler for C supports arguments -mavx512f: YES (cached)
00:03:26.355 Compiler for C supports arguments -mavx512bw: YES
00:03:26.355 Compiler for C supports arguments -mavx512dq: YES
00:03:26.355 Compiler for C supports arguments -mavx512vl: YES
00:03:26.355 Compiler for C supports arguments -mvpclmulqdq: YES
00:03:26.355 Compiler for C supports arguments -mavx2: YES
00:03:26.355 Compiler for C supports arguments -mavx: YES
00:03:26.355 Message: lib/net: Defining dependency "net"
00:03:26.355 Message: lib/meter: Defining dependency "meter"
00:03:26.355 Message: lib/ethdev: Defining dependency "ethdev"
00:03:26.355 Message: lib/pci: Defining dependency "pci"
00:03:26.355 Message: lib/cmdline: Defining dependency "cmdline"
00:03:26.355 Message: lib/hash: Defining dependency "hash"
00:03:26.355 Message: lib/timer: Defining dependency "timer"
00:03:26.355 Message: lib/compressdev: Defining dependency "compressdev"
00:03:26.355 Message: lib/cryptodev: Defining dependency "cryptodev"
00:03:26.355 Message: lib/dmadev: Defining dependency "dmadev"
00:03:26.355 Compiler for C supports arguments -Wno-cast-qual: YES
00:03:26.355 Message: lib/power: Defining dependency "power"
00:03:26.355 Message: lib/reorder: Defining dependency "reorder"
00:03:26.355 Message: lib/security: Defining dependency "security"
00:03:26.355 Has header "linux/userfaultfd.h" : YES
00:03:26.355 Has header "linux/vduse.h" : YES
00:03:26.355 Message: lib/vhost: Defining dependency "vhost"
00:03:26.355 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:03:26.355 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:03:26.355 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:03:26.355 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:03:26.355 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:03:26.355 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:03:26.355 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:03:26.355 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:03:26.355 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:03:26.355 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:03:26.355 Program doxygen found: YES (/usr/bin/doxygen)
00:03:26.355 Configuring doxy-api-html.conf using configuration
00:03:26.355 Configuring doxy-api-man.conf using configuration
00:03:26.355 Program mandb found: YES (/usr/bin/mandb)
00:03:26.355 Program sphinx-build found: NO
00:03:26.355 Configuring rte_build_config.h using configuration
00:03:26.355 Message:
00:03:26.355 =================
00:03:26.355 Applications Enabled
00:03:26.355 =================
00:03:26.355
00:03:26.355 apps:
00:03:26.355
00:03:26.355
00:03:26.355 Message:
00:03:26.355 =================
00:03:26.355 Libraries Enabled
00:03:26.355 =================
00:03:26.355
00:03:26.355 libs:
00:03:26.355 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:03:26.355 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:03:26.355 cryptodev, dmadev, power, reorder, security, vhost,
00:03:26.355
00:03:26.355 Message:
00:03:26.355 ===============
00:03:26.355 Drivers Enabled
00:03:26.355 ===============
00:03:26.355
00:03:26.355 common:
00:03:26.355
00:03:26.355 bus:
00:03:26.355 pci, vdev,
00:03:26.355 mempool:
00:03:26.355 ring,
00:03:26.355 dma:
00:03:26.355
00:03:26.355 net:
00:03:26.355
00:03:26.355 crypto:
00:03:26.355
00:03:26.355 compress:
00:03:26.355
00:03:26.355 vdpa:
00:03:26.355
00:03:26.355
00:03:26.355 Message:
00:03:26.355 =================
00:03:26.355 Content Skipped
00:03:26.355 =================
00:03:26.355
00:03:26.355 apps:
00:03:26.355 dumpcap: explicitly disabled via build config
00:03:26.355 graph: explicitly disabled via build config
00:03:26.355 pdump: explicitly disabled via build config
00:03:26.355 proc-info: explicitly disabled via build config
00:03:26.355 test-acl: explicitly disabled via build config
00:03:26.355 test-bbdev: explicitly disabled via build config
00:03:26.355 test-cmdline: explicitly disabled via build config
00:03:26.355 test-compress-perf: explicitly disabled via build config
00:03:26.355 test-crypto-perf: explicitly disabled via build config
00:03:26.355 test-dma-perf: explicitly disabled via build config
00:03:26.355 test-eventdev: explicitly disabled via build config
00:03:26.355 test-fib: explicitly disabled via build config
00:03:26.355 test-flow-perf: explicitly disabled via build config
00:03:26.355 test-gpudev: explicitly disabled via build config
00:03:26.355 test-mldev: explicitly disabled via build config
00:03:26.355 test-pipeline: explicitly disabled via build config
00:03:26.355 test-pmd: explicitly disabled via build config
00:03:26.355 test-regex: explicitly disabled via build config
00:03:26.355 test-sad: explicitly disabled via build config
00:03:26.355 test-security-perf: explicitly disabled via build config
00:03:26.355
00:03:26.355 libs:
00:03:26.355 argparse: explicitly disabled via build config
00:03:26.355 metrics: explicitly disabled via build config
00:03:26.355 acl: explicitly disabled via build config
00:03:26.355 bbdev: explicitly disabled via build config
00:03:26.355 bitratestats: explicitly disabled via build config
00:03:26.355 bpf: explicitly disabled via build config
00:03:26.355 cfgfile: explicitly disabled via build config
00:03:26.355 distributor: explicitly disabled via build config
00:03:26.355 efd: explicitly disabled via build config
00:03:26.355 eventdev: explicitly disabled via build config
00:03:26.355 dispatcher: explicitly disabled via build config
00:03:26.355 gpudev: explicitly disabled via build config
00:03:26.355 gro: explicitly disabled via build config
00:03:26.355 gso: explicitly disabled via build config
00:03:26.355 ip_frag: explicitly disabled via build config
00:03:26.355 jobstats: explicitly disabled via build config
00:03:26.355 latencystats: explicitly disabled via build config
00:03:26.355 lpm: explicitly disabled via build config
00:03:26.355 member: explicitly disabled via build config
00:03:26.355 pcapng: explicitly disabled via build config
00:03:26.355 rawdev: explicitly disabled via build config
00:03:26.355 regexdev: explicitly disabled via build config
00:03:26.355 mldev: explicitly disabled via build config
00:03:26.355 rib: explicitly disabled via build config
00:03:26.355 sched: explicitly disabled via build config
00:03:26.355 stack: explicitly disabled via build config
00:03:26.355 ipsec: explicitly disabled via build config
00:03:26.355 pdcp: explicitly disabled via build config
00:03:26.355 fib: explicitly disabled via build config
00:03:26.355 port: explicitly disabled via build config
00:03:26.355 pdump: explicitly disabled via build config
00:03:26.355 table: explicitly disabled via build config
00:03:26.355 pipeline: explicitly disabled via build config
00:03:26.355 graph: explicitly disabled via build config
00:03:26.355 node: explicitly disabled via build config
00:03:26.355
00:03:26.355 drivers:
00:03:26.355 common/cpt: not in enabled drivers build config
00:03:26.355 common/dpaax: not in enabled drivers build config
00:03:26.355 common/iavf: not in enabled drivers build config
00:03:26.355 common/idpf: not in enabled drivers build config
00:03:26.355 common/ionic: not in enabled drivers build config
00:03:26.355 common/mvep: not in enabled drivers build config
00:03:26.355 common/octeontx: not in enabled drivers build config
00:03:26.355 bus/auxiliary: not in enabled drivers build config
00:03:26.355 bus/cdx: not in enabled drivers build config
00:03:26.355 bus/dpaa: not in enabled drivers build config
00:03:26.355 bus/fslmc: not in enabled drivers build config
00:03:26.355 bus/ifpga: not in enabled drivers build config
00:03:26.355 bus/platform: not in enabled drivers build config
00:03:26.355 bus/uacce: not in enabled drivers build config
00:03:26.355 bus/vmbus: not in enabled drivers build config
00:03:26.355 common/cnxk: not in enabled drivers build config
00:03:26.355 common/mlx5: not in enabled drivers build config
00:03:26.355 common/nfp: not in enabled drivers build config
00:03:26.355 common/nitrox: not in enabled drivers build config
00:03:26.355 common/qat: not in enabled drivers build config
00:03:26.355 common/sfc_efx: not in enabled drivers build config
00:03:26.355 mempool/bucket: not in enabled drivers build config
00:03:26.355 mempool/cnxk: not in enabled drivers build config
00:03:26.355 mempool/dpaa: not in enabled drivers build config
00:03:26.355 mempool/dpaa2: not in enabled drivers build config
00:03:26.355 mempool/octeontx: not in enabled drivers build config
00:03:26.355 mempool/stack: not in enabled drivers build config
00:03:26.355 dma/cnxk: not in enabled drivers build config
00:03:26.355 dma/dpaa: not in enabled drivers build config
00:03:26.355 dma/dpaa2: not in enabled drivers build config
00:03:26.355 dma/hisilicon: not in enabled drivers build config
00:03:26.355 dma/idxd: not in enabled drivers build config
00:03:26.355 dma/ioat: not in enabled drivers build config
00:03:26.355 dma/skeleton: not in enabled drivers build config
00:03:26.355 net/af_packet: not in enabled drivers build config
00:03:26.355 net/af_xdp: not in enabled drivers build config
00:03:26.355 net/ark: not in enabled drivers build config
00:03:26.355 net/atlantic: not in enabled drivers build config
00:03:26.355 net/avp: not in enabled drivers build config
00:03:26.355 net/axgbe: not in enabled drivers build config
00:03:26.355 net/bnx2x: not in enabled drivers build config
00:03:26.355 net/bnxt: not in enabled drivers build config
00:03:26.355 net/bonding: not in enabled drivers build config
00:03:26.355 net/cnxk: not in enabled drivers build config
00:03:26.355 net/cpfl: not in enabled drivers build config
00:03:26.355 net/cxgbe: not in enabled drivers build config
00:03:26.355 net/dpaa: not in enabled drivers build config
00:03:26.355 net/dpaa2: not in enabled drivers build config
00:03:26.355 net/e1000: not in enabled drivers build config
00:03:26.355 net/ena: not in enabled drivers build config
00:03:26.355 net/enetc: not in enabled drivers build config
00:03:26.355 net/enetfec: not in enabled drivers build config
00:03:26.355 net/enic: not in enabled drivers build config
00:03:26.355 net/failsafe: not in enabled drivers build config
00:03:26.355 net/fm10k: not in enabled drivers build config
00:03:26.355 net/gve: not in enabled drivers build config
00:03:26.355 net/hinic: not in enabled drivers build config
00:03:26.355 net/hns3: not in enabled drivers build config
00:03:26.356 net/i40e: not in enabled drivers build config
00:03:26.356 net/iavf: not in enabled drivers build config
00:03:26.356 net/ice: not in enabled drivers build config
00:03:26.356 net/idpf: not in enabled drivers build config
00:03:26.356 net/igc: not in enabled drivers build config
00:03:26.356 net/ionic: not in enabled drivers build config
00:03:26.356 net/ipn3ke: not in enabled drivers build config
00:03:26.356 net/ixgbe: not in enabled drivers build config
00:03:26.356 net/mana: not in enabled drivers build config
00:03:26.356 net/memif: not in enabled drivers build config
00:03:26.356 net/mlx4: not in enabled drivers build config
00:03:26.356 net/mlx5: not in enabled drivers build config
00:03:26.356 net/mvneta: not in enabled drivers build config
00:03:26.356 net/mvpp2: not in enabled drivers build config
00:03:26.356 net/netvsc: not in enabled drivers build config
00:03:26.356 net/nfb: not in enabled drivers build config
00:03:26.356 net/nfp: not in enabled drivers build config
00:03:26.356 net/ngbe: not in enabled drivers build config
00:03:26.356 net/null: not in enabled drivers build config
00:03:26.356 net/octeontx: not in enabled drivers build config
00:03:26.356 net/octeon_ep: not in enabled drivers build config
00:03:26.356 net/pcap: not in enabled drivers build config
00:03:26.356 net/pfe: not in enabled drivers build config
00:03:26.356 net/qede: not in enabled drivers build config
00:03:26.356 net/ring: not in enabled drivers build config
00:03:26.356 net/sfc: not in enabled drivers build config
00:03:26.356 net/softnic: not in enabled drivers build config
00:03:26.356 net/tap: not in enabled drivers build config
00:03:26.356 net/thunderx: not in enabled drivers build config
00:03:26.356 net/txgbe: not in enabled drivers build config
00:03:26.356 net/vdev_netvsc: not in enabled drivers build config
00:03:26.356 net/vhost: not in enabled drivers build config
00:03:26.356 net/virtio: not in enabled drivers build config
00:03:26.356 net/vmxnet3: not in enabled drivers build config
00:03:26.356 raw/*: missing internal dependency, "rawdev"
00:03:26.356 crypto/armv8: not in enabled drivers build config
00:03:26.356 crypto/bcmfs: not in enabled drivers build config
00:03:26.356 crypto/caam_jr: not in enabled drivers build config
00:03:26.356 crypto/ccp: not in enabled drivers build config
00:03:26.356 crypto/cnxk: not in enabled drivers build config
00:03:26.356 crypto/dpaa_sec: not in enabled drivers build config
00:03:26.356 crypto/dpaa2_sec: not in enabled drivers build config
00:03:26.356 crypto/ipsec_mb: not in enabled drivers build config
00:03:26.356 crypto/mlx5: not in enabled drivers build config
00:03:26.356 crypto/mvsam: not in enabled drivers build config
00:03:26.356 crypto/nitrox: not in enabled drivers build config
00:03:26.356 crypto/null: not in enabled drivers build config
00:03:26.356 crypto/octeontx: not in enabled drivers build config
00:03:26.356 crypto/openssl: not in enabled drivers build config
00:03:26.356 crypto/scheduler: not in enabled drivers build config
00:03:26.356 crypto/uadk: not in enabled drivers build config
00:03:26.356 crypto/virtio: not in enabled drivers build config
00:03:26.356 compress/isal: not in enabled drivers build config
00:03:26.356 compress/mlx5: not in enabled drivers build config
00:03:26.356 compress/nitrox: not in enabled drivers build config
00:03:26.356 compress/octeontx: not in enabled drivers build config
00:03:26.356 compress/zlib: not in enabled drivers build config
00:03:26.356 regex/*: missing internal dependency, "regexdev"
00:03:26.356 ml/*: missing internal dependency, "mldev"
00:03:26.356 vdpa/ifc: not in enabled drivers build config
00:03:26.356 vdpa/mlx5: not in enabled drivers build config
00:03:26.356 vdpa/nfp: not in enabled drivers build config
00:03:26.356 vdpa/sfc: not in enabled drivers build config
00:03:26.356 event/*: missing internal dependency, "eventdev"
00:03:26.356 baseband/*: missing internal dependency, "bbdev"
00:03:26.356 gpu/*: missing internal dependency, "gpudev"
00:03:26.356
00:03:26.356
00:03:26.356 Build targets in project: 85
00:03:26.356
00:03:26.356 DPDK 24.03.0
00:03:26.356
00:03:26.356 User defined options
00:03:26.356 buildtype : debug
00:03:26.356 default_library : shared
00:03:26.356 libdir : lib
00:03:26.356 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:03:26.356 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:03:26.356 c_link_args :
00:03:26.356 cpu_instruction_set: native
00:03:26.356 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump
00:03:26.356 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump
00:03:26.356 enable_docs : false
00:03:26.356 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:03:26.356 enable_kmods : false
00:03:26.356 max_lcores : 128
00:03:26.356 tests : false
00:03:26.356
00:03:26.356 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:26.933 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:03:26.933 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:03:26.933 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:03:26.933 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:03:26.933 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:03:26.933 [5/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:03:26.933 [6/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:03:26.933 [7/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:03:26.933 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:03:26.933 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:03:26.933 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:03:26.933 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:03:26.933 [12/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:03:26.933 [13/268] Linking static target lib/librte_kvargs.a
00:03:26.933 [14/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:03:26.933 [15/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:03:26.933 [16/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:03:26.933 [17/268] Linking static target lib/librte_log.a
00:03:26.933 [18/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:03:26.933 [19/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:03:27.192 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:03:27.192 [21/268] Linking static target lib/librte_pci.a
00:03:27.192 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:03:27.192 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:03:27.192 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:03:27.192 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:03:27.451 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:03:27.451 [27/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:03:27.452 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:03:27.452 [29/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:03:27.452 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:03:27.452 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:03:27.452 [32/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:03:27.452 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:03:27.452 [34/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:03:27.452 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:03:27.452 [36/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:03:27.452 [37/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:03:27.452 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:03:27.452 [39/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:03:27.452 [40/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:03:27.452 [41/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:03:27.452 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:03:27.452 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:03:27.452 [44/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:03:27.452 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:03:27.452 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:03:27.452 [47/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:03:27.452 [48/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:03:27.452 [49/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:03:27.452 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:03:27.452 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:03:27.452 [52/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:03:27.452 [53/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:03:27.452 [54/268] Linking static target lib/librte_ring.a
00:03:27.452 [55/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:03:27.452 [56/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:03:27.452 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:03:27.452 [58/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:03:27.452 [59/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:03:27.452 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:03:27.452 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:03:27.452 [62/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:03:27.452 [63/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:03:27.452 [64/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:03:27.452 [65/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:03:27.452 [66/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:03:27.452 [67/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:03:27.452 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:03:27.452 [69/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:03:27.452 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:03:27.452 [71/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:03:27.452 [72/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:03:27.452 [73/268] Linking static target lib/librte_telemetry.a
00:03:27.452 [74/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:03:27.452 [75/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:03:27.452 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:03:27.452 [77/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:03:27.452 [78/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:03:27.452 [79/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:03:27.452 [80/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:03:27.452 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:03:27.452 [82/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:03:27.452 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:03:27.452 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:03:27.452 [85/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:03:27.452 [86/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:03:27.452 [87/268] Linking static target lib/librte_meter.a
00:03:27.452 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:03:27.452 [89/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:03:27.452 [90/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:03:27.452 [91/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:03:27.452 [92/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:03:27.452 [93/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:03:27.452 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:03:27.452 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:03:27.452 [96/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:03:27.717 [97/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:03:27.717 [98/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:03:27.717 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:03:27.717 [100/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:03:27.717 [101/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:03:27.717 [102/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:03:27.717 [103/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:03:27.717 [104/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:03:27.717 [105/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:03:27.717 [106/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:03:27.717 [107/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:03:27.717 [108/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:03:27.717 [109/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:03:27.717 [110/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:03:27.717 [111/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:03:27.717 [112/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:03:27.717 [113/268] Linking static target lib/librte_mempool.a
00:03:27.717 [114/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:03:27.717 [115/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:03:27.717 [116/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:03:27.717 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:03:27.717 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:03:27.717 [119/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:03:27.717 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:03:27.717 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:03:27.717 [122/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:03:27.717 [123/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:03:27.717 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:03:27.717 [125/268] Linking static target lib/librte_cmdline.a
00:03:27.717 [126/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:03:27.717 [127/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:03:27.717 [128/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:03:27.717 [129/268] Linking static target lib/librte_rcu.a
00:03:27.717 [130/268] Linking static target lib/librte_net.a
00:03:27.717 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:03:27.717 [132/268] Linking static target lib/librte_eal.a
00:03:27.717 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:03:27.717 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:03:27.717 [135/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:03:27.717 [136/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:03:27.717 [137/268] Linking target lib/librte_log.so.24.1
00:03:27.717 [138/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:03:27.717 [139/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:03:28.034 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:03:28.034 [141/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:03:28.034 [142/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:03:28.034 [143/268] Linking static target lib/librte_mbuf.a
00:03:28.034 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:03:28.034 [145/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:03:28.034 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:03:28.034 [147/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:03:28.034 [148/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:03:28.034 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:03:28.034 [150/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:03:28.034 [151/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:03:28.034 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:03:28.034 [153/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:03:28.034 [154/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:03:28.034 [155/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:03:28.034 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:03:28.034 [157/268] Linking static target lib/librte_dmadev.a
00:03:28.034 [158/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:03:28.034 [159/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:03:28.034 [160/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:03:28.034 [161/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:03:28.034 [162/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:03:28.034 [163/268] Linking static target drivers/libtmp_rte_bus_vdev.a
00:03:28.034 [164/268] Linking target lib/librte_kvargs.so.24.1
00:03:28.034 [165/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:03:28.034 [166/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:03:28.034 [167/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:03:28.034 [168/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:03:28.034 [169/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:03:28.034 [170/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:03:28.034 [171/268] Linking target lib/librte_telemetry.so.24.1
00:03:28.034 [172/268] Linking static target lib/librte_reorder.a
00:03:28.034 [173/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:03:28.034 [174/268] Linking static target lib/librte_compressdev.a
00:03:28.034 [175/268] Linking static target lib/librte_timer.a
00:03:28.034 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:03:28.034 [177/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:03:28.034 [178/268] Linking static target lib/librte_power.a
00:03:28.034 [179/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:03:28.034 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:03:28.034 [181/268] Linking static target drivers/libtmp_rte_mempool_ring.a
00:03:28.034 [182/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:03:28.034 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:03:28.034 [184/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:03:28.034 [185/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:03:28.035 [186/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:03:28.035 [187/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:03:28.035 [188/268] Linking static target drivers/libtmp_rte_bus_pci.a
00:03:28.035 [189/268] Linking static target lib/librte_hash.a
00:03:28.294 [190/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:03:28.294 [191/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:03:28.294 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:03:28.294 [193/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:03:28.294 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:03:28.294 [195/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:03:28.294 [196/268] Linking static target lib/librte_security.a
00:03:28.294 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:03:28.294 [198/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:03:28.294 [199/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:03:28.294 [200/268] Linking static target drivers/librte_bus_vdev.a
00:03:28.294 [201/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:03:28.294 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:03:28.294 [203/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:03:28.294 [204/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:03:28.294 [205/268] Linking static target drivers/librte_mempool_ring.a
00:03:28.294 [206/268] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:03:28.294 [207/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:03:28.294 [208/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:03:28.294 [209/268] Linking static target drivers/librte_bus_pci.a
00:03:28.294 [210/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:03:28.294 [211/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:03:28.554 [212/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:03:28.554 [213/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:03:28.554 [214/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:03:28.554 [215/268] Linking static target lib/librte_cryptodev.a
00:03:28.554 [216/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:28.554 [217/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:28.554 [218/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:03:28.554 [219/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:03:28.554 [220/268] Linking static target lib/librte_ethdev.a
00:03:28.554 [221/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:28.813 [222/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:03:28.813 [223/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:03:28.813 [224/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:03:28.813 [225/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:03:29.072 [226/268] Generating
lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.072 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:30.009 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:30.009 [229/268] Linking static target lib/librte_vhost.a 00:03:30.268 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:31.645 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:36.919 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.489 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.489 [234/268] Linking target lib/librte_eal.so.24.1 00:03:37.749 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:37.749 [236/268] Linking target lib/librte_ring.so.24.1 00:03:37.749 [237/268] Linking target lib/librte_pci.so.24.1 00:03:37.749 [238/268] Linking target lib/librte_timer.so.24.1 00:03:37.749 [239/268] Linking target lib/librte_meter.so.24.1 00:03:37.749 [240/268] Linking target lib/librte_dmadev.so.24.1 00:03:37.749 [241/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:37.749 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:38.007 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:38.007 [244/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:38.007 [245/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:38.007 [246/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:38.007 [247/268] Linking target lib/librte_rcu.so.24.1 00:03:38.007 [248/268] Linking target lib/librte_mempool.so.24.1 00:03:38.007 [249/268] Linking 
target drivers/librte_bus_pci.so.24.1 00:03:38.007 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:38.007 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:38.007 [252/268] Linking target lib/librte_mbuf.so.24.1 00:03:38.008 [253/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:38.267 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:38.267 [255/268] Linking target lib/librte_net.so.24.1 00:03:38.267 [256/268] Linking target lib/librte_compressdev.so.24.1 00:03:38.267 [257/268] Linking target lib/librte_reorder.so.24.1 00:03:38.267 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:03:38.527 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:38.527 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:38.527 [261/268] Linking target lib/librte_security.so.24.1 00:03:38.527 [262/268] Linking target lib/librte_hash.so.24.1 00:03:38.527 [263/268] Linking target lib/librte_cmdline.so.24.1 00:03:38.527 [264/268] Linking target lib/librte_ethdev.so.24.1 00:03:38.527 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:38.527 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:38.787 [267/268] Linking target lib/librte_power.so.24.1 00:03:38.787 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:38.787 INFO: autodetecting backend as ninja 00:03:38.787 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:03:39.725 CC lib/ut/ut.o 00:03:39.725 CC lib/log/log.o 00:03:39.725 CC lib/log/log_deprecated.o 00:03:39.725 CC lib/log/log_flags.o 00:03:39.725 CC lib/ut_mock/mock.o 00:03:39.725 LIB libspdk_ut.a 00:03:39.725 SO libspdk_ut.so.2.0 00:03:39.725 LIB libspdk_log.a 
00:03:39.725 LIB libspdk_ut_mock.a 00:03:39.985 SO libspdk_log.so.7.0 00:03:39.985 SO libspdk_ut_mock.so.6.0 00:03:39.985 SYMLINK libspdk_ut.so 00:03:39.985 SYMLINK libspdk_ut_mock.so 00:03:39.985 SYMLINK libspdk_log.so 00:03:40.244 CC lib/dma/dma.o 00:03:40.244 CC lib/ioat/ioat.o 00:03:40.244 CC lib/util/base64.o 00:03:40.244 CC lib/util/bit_array.o 00:03:40.244 CC lib/util/cpuset.o 00:03:40.244 CC lib/util/crc32.o 00:03:40.244 CC lib/util/crc16.o 00:03:40.244 CC lib/util/crc32c.o 00:03:40.244 CC lib/util/crc32_ieee.o 00:03:40.244 CC lib/util/crc64.o 00:03:40.244 CC lib/util/fd.o 00:03:40.244 CC lib/util/dif.o 00:03:40.244 CC lib/util/file.o 00:03:40.244 CC lib/util/hexlify.o 00:03:40.244 CXX lib/trace_parser/trace.o 00:03:40.244 CC lib/util/iov.o 00:03:40.244 CC lib/util/math.o 00:03:40.244 CC lib/util/pipe.o 00:03:40.244 CC lib/util/strerror_tls.o 00:03:40.244 CC lib/util/string.o 00:03:40.244 CC lib/util/uuid.o 00:03:40.244 CC lib/util/fd_group.o 00:03:40.244 CC lib/util/xor.o 00:03:40.244 CC lib/util/zipf.o 00:03:40.502 CC lib/vfio_user/host/vfio_user_pci.o 00:03:40.502 CC lib/vfio_user/host/vfio_user.o 00:03:40.502 LIB libspdk_dma.a 00:03:40.502 SO libspdk_dma.so.4.0 00:03:40.502 LIB libspdk_ioat.a 00:03:40.502 SYMLINK libspdk_dma.so 00:03:40.502 SO libspdk_ioat.so.7.0 00:03:40.502 SYMLINK libspdk_ioat.so 00:03:40.502 LIB libspdk_vfio_user.a 00:03:40.502 SO libspdk_vfio_user.so.5.0 00:03:40.761 LIB libspdk_util.a 00:03:40.761 SYMLINK libspdk_vfio_user.so 00:03:40.761 SO libspdk_util.so.9.1 00:03:40.761 SYMLINK libspdk_util.so 00:03:41.021 LIB libspdk_trace_parser.a 00:03:41.021 SO libspdk_trace_parser.so.5.0 00:03:41.021 SYMLINK libspdk_trace_parser.so 00:03:41.021 CC lib/vmd/vmd.o 00:03:41.021 CC lib/vmd/led.o 00:03:41.021 CC lib/rdma_utils/rdma_utils.o 00:03:41.021 CC lib/json/json_parse.o 00:03:41.021 CC lib/json/json_util.o 00:03:41.021 CC lib/idxd/idxd.o 00:03:41.021 CC lib/json/json_write.o 00:03:41.021 CC lib/idxd/idxd_user.o 00:03:41.021 CC 
lib/idxd/idxd_kernel.o 00:03:41.021 CC lib/conf/conf.o 00:03:41.021 CC lib/rdma_provider/common.o 00:03:41.021 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:41.021 CC lib/env_dpdk/env.o 00:03:41.021 CC lib/env_dpdk/memory.o 00:03:41.021 CC lib/env_dpdk/pci.o 00:03:41.021 CC lib/env_dpdk/init.o 00:03:41.021 CC lib/env_dpdk/threads.o 00:03:41.021 CC lib/env_dpdk/pci_ioat.o 00:03:41.021 CC lib/env_dpdk/pci_virtio.o 00:03:41.021 CC lib/env_dpdk/pci_vmd.o 00:03:41.021 CC lib/env_dpdk/pci_idxd.o 00:03:41.021 CC lib/env_dpdk/sigbus_handler.o 00:03:41.021 CC lib/env_dpdk/pci_event.o 00:03:41.021 CC lib/env_dpdk/pci_dpdk.o 00:03:41.021 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:41.281 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:41.281 LIB libspdk_rdma_provider.a 00:03:41.281 LIB libspdk_conf.a 00:03:41.281 LIB libspdk_rdma_utils.a 00:03:41.281 SO libspdk_rdma_provider.so.6.0 00:03:41.281 SO libspdk_conf.so.6.0 00:03:41.281 LIB libspdk_json.a 00:03:41.281 SO libspdk_rdma_utils.so.1.0 00:03:41.540 SYMLINK libspdk_conf.so 00:03:41.540 SYMLINK libspdk_rdma_provider.so 00:03:41.540 SO libspdk_json.so.6.0 00:03:41.540 SYMLINK libspdk_rdma_utils.so 00:03:41.540 SYMLINK libspdk_json.so 00:03:41.540 LIB libspdk_idxd.a 00:03:41.540 LIB libspdk_vmd.a 00:03:41.540 SO libspdk_idxd.so.12.0 00:03:41.540 SO libspdk_vmd.so.6.0 00:03:41.800 SYMLINK libspdk_idxd.so 00:03:41.800 SYMLINK libspdk_vmd.so 00:03:41.800 CC lib/jsonrpc/jsonrpc_server.o 00:03:41.800 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:41.800 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:41.800 CC lib/jsonrpc/jsonrpc_client.o 00:03:42.060 LIB libspdk_jsonrpc.a 00:03:42.060 SO libspdk_jsonrpc.so.6.0 00:03:42.060 SYMLINK libspdk_jsonrpc.so 00:03:42.060 LIB libspdk_env_dpdk.a 00:03:42.060 SO libspdk_env_dpdk.so.14.1 00:03:42.319 SYMLINK libspdk_env_dpdk.so 00:03:42.319 CC lib/rpc/rpc.o 00:03:42.579 LIB libspdk_rpc.a 00:03:42.579 SO libspdk_rpc.so.6.0 00:03:42.579 SYMLINK libspdk_rpc.so 00:03:42.839 CC lib/notify/notify_rpc.o 00:03:42.839 CC 
lib/notify/notify.o 00:03:43.099 CC lib/trace/trace.o 00:03:43.099 CC lib/keyring/keyring_rpc.o 00:03:43.099 CC lib/keyring/keyring.o 00:03:43.099 CC lib/trace/trace_flags.o 00:03:43.099 CC lib/trace/trace_rpc.o 00:03:43.099 LIB libspdk_notify.a 00:03:43.099 SO libspdk_notify.so.6.0 00:03:43.099 SYMLINK libspdk_notify.so 00:03:43.099 LIB libspdk_trace.a 00:03:43.099 LIB libspdk_keyring.a 00:03:43.099 SO libspdk_keyring.so.1.0 00:03:43.359 SO libspdk_trace.so.10.0 00:03:43.359 SYMLINK libspdk_keyring.so 00:03:43.359 SYMLINK libspdk_trace.so 00:03:43.619 CC lib/thread/thread.o 00:03:43.619 CC lib/thread/iobuf.o 00:03:43.619 CC lib/sock/sock.o 00:03:43.619 CC lib/sock/sock_rpc.o 00:03:43.878 LIB libspdk_sock.a 00:03:43.878 SO libspdk_sock.so.10.0 00:03:43.878 SYMLINK libspdk_sock.so 00:03:44.446 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:44.446 CC lib/nvme/nvme_ctrlr.o 00:03:44.446 CC lib/nvme/nvme_ns_cmd.o 00:03:44.446 CC lib/nvme/nvme_fabric.o 00:03:44.446 CC lib/nvme/nvme_ns.o 00:03:44.446 CC lib/nvme/nvme_qpair.o 00:03:44.446 CC lib/nvme/nvme_pcie_common.o 00:03:44.446 CC lib/nvme/nvme.o 00:03:44.446 CC lib/nvme/nvme_pcie.o 00:03:44.446 CC lib/nvme/nvme_quirks.o 00:03:44.446 CC lib/nvme/nvme_transport.o 00:03:44.446 CC lib/nvme/nvme_discovery.o 00:03:44.446 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:44.446 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:44.446 CC lib/nvme/nvme_tcp.o 00:03:44.446 CC lib/nvme/nvme_opal.o 00:03:44.446 CC lib/nvme/nvme_io_msg.o 00:03:44.446 CC lib/nvme/nvme_poll_group.o 00:03:44.446 CC lib/nvme/nvme_zns.o 00:03:44.446 CC lib/nvme/nvme_stubs.o 00:03:44.446 CC lib/nvme/nvme_auth.o 00:03:44.446 CC lib/nvme/nvme_cuse.o 00:03:44.446 CC lib/nvme/nvme_vfio_user.o 00:03:44.446 CC lib/nvme/nvme_rdma.o 00:03:44.745 LIB libspdk_thread.a 00:03:44.745 SO libspdk_thread.so.10.1 00:03:44.745 SYMLINK libspdk_thread.so 00:03:45.023 CC lib/vfu_tgt/tgt_endpoint.o 00:03:45.023 CC lib/vfu_tgt/tgt_rpc.o 00:03:45.023 CC lib/init/json_config.o 00:03:45.023 CC lib/init/rpc.o 
00:03:45.023 CC lib/init/subsystem.o 00:03:45.023 CC lib/init/subsystem_rpc.o 00:03:45.023 CC lib/blob/request.o 00:03:45.023 CC lib/blob/blobstore.o 00:03:45.023 CC lib/accel/accel.o 00:03:45.023 CC lib/blob/zeroes.o 00:03:45.023 CC lib/accel/accel_rpc.o 00:03:45.023 CC lib/blob/blob_bs_dev.o 00:03:45.023 CC lib/accel/accel_sw.o 00:03:45.023 CC lib/virtio/virtio.o 00:03:45.023 CC lib/virtio/virtio_vfio_user.o 00:03:45.023 CC lib/virtio/virtio_vhost_user.o 00:03:45.023 CC lib/virtio/virtio_pci.o 00:03:45.330 LIB libspdk_init.a 00:03:45.330 SO libspdk_init.so.5.0 00:03:45.330 LIB libspdk_vfu_tgt.a 00:03:45.330 SO libspdk_vfu_tgt.so.3.0 00:03:45.330 LIB libspdk_virtio.a 00:03:45.330 SYMLINK libspdk_init.so 00:03:45.330 SYMLINK libspdk_vfu_tgt.so 00:03:45.330 SO libspdk_virtio.so.7.0 00:03:45.330 SYMLINK libspdk_virtio.so 00:03:45.590 CC lib/event/app.o 00:03:45.590 CC lib/event/reactor.o 00:03:45.590 CC lib/event/log_rpc.o 00:03:45.590 CC lib/event/app_rpc.o 00:03:45.590 CC lib/event/scheduler_static.o 00:03:45.850 LIB libspdk_accel.a 00:03:45.850 SO libspdk_accel.so.15.1 00:03:45.850 SYMLINK libspdk_accel.so 00:03:45.850 LIB libspdk_event.a 00:03:45.850 LIB libspdk_nvme.a 00:03:45.850 SO libspdk_event.so.14.0 00:03:45.850 SO libspdk_nvme.so.13.1 00:03:46.110 SYMLINK libspdk_event.so 00:03:46.110 CC lib/bdev/bdev.o 00:03:46.110 CC lib/bdev/bdev_rpc.o 00:03:46.110 CC lib/bdev/bdev_zone.o 00:03:46.110 CC lib/bdev/part.o 00:03:46.110 CC lib/bdev/scsi_nvme.o 00:03:46.110 SYMLINK libspdk_nvme.so 00:03:47.050 LIB libspdk_blob.a 00:03:47.050 SO libspdk_blob.so.11.0 00:03:47.050 SYMLINK libspdk_blob.so 00:03:47.620 CC lib/blobfs/blobfs.o 00:03:47.620 CC lib/blobfs/tree.o 00:03:47.620 CC lib/lvol/lvol.o 00:03:47.879 LIB libspdk_bdev.a 00:03:47.879 SO libspdk_bdev.so.15.1 00:03:48.139 LIB libspdk_blobfs.a 00:03:48.139 SYMLINK libspdk_bdev.so 00:03:48.139 SO libspdk_blobfs.so.10.0 00:03:48.139 LIB libspdk_lvol.a 00:03:48.139 SYMLINK libspdk_blobfs.so 00:03:48.139 SO 
libspdk_lvol.so.10.0 00:03:48.139 SYMLINK libspdk_lvol.so 00:03:48.398 CC lib/nbd/nbd_rpc.o 00:03:48.398 CC lib/nbd/nbd.o 00:03:48.398 CC lib/ublk/ublk.o 00:03:48.398 CC lib/ublk/ublk_rpc.o 00:03:48.398 CC lib/scsi/dev.o 00:03:48.398 CC lib/scsi/lun.o 00:03:48.398 CC lib/scsi/port.o 00:03:48.398 CC lib/scsi/scsi.o 00:03:48.398 CC lib/scsi/scsi_bdev.o 00:03:48.398 CC lib/scsi/scsi_rpc.o 00:03:48.398 CC lib/scsi/scsi_pr.o 00:03:48.398 CC lib/scsi/task.o 00:03:48.398 CC lib/ftl/ftl_core.o 00:03:48.398 CC lib/nvmf/ctrlr.o 00:03:48.398 CC lib/ftl/ftl_init.o 00:03:48.398 CC lib/nvmf/ctrlr_discovery.o 00:03:48.398 CC lib/ftl/ftl_layout.o 00:03:48.398 CC lib/nvmf/ctrlr_bdev.o 00:03:48.398 CC lib/ftl/ftl_debug.o 00:03:48.398 CC lib/nvmf/subsystem.o 00:03:48.398 CC lib/nvmf/nvmf.o 00:03:48.398 CC lib/ftl/ftl_io.o 00:03:48.398 CC lib/ftl/ftl_sb.o 00:03:48.398 CC lib/nvmf/transport.o 00:03:48.398 CC lib/nvmf/nvmf_rpc.o 00:03:48.398 CC lib/ftl/ftl_l2p.o 00:03:48.398 CC lib/nvmf/tcp.o 00:03:48.398 CC lib/nvmf/mdns_server.o 00:03:48.398 CC lib/ftl/ftl_nv_cache.o 00:03:48.398 CC lib/ftl/ftl_band.o 00:03:48.398 CC lib/ftl/ftl_l2p_flat.o 00:03:48.398 CC lib/nvmf/stubs.o 00:03:48.398 CC lib/nvmf/vfio_user.o 00:03:48.398 CC lib/ftl/ftl_band_ops.o 00:03:48.398 CC lib/nvmf/rdma.o 00:03:48.398 CC lib/nvmf/auth.o 00:03:48.398 CC lib/ftl/ftl_writer.o 00:03:48.398 CC lib/ftl/ftl_rq.o 00:03:48.398 CC lib/ftl/ftl_reloc.o 00:03:48.398 CC lib/ftl/ftl_l2p_cache.o 00:03:48.398 CC lib/ftl/ftl_p2l.o 00:03:48.398 CC lib/ftl/mngt/ftl_mngt.o 00:03:48.398 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:48.398 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:48.398 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:48.398 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:48.398 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:48.398 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:48.398 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:48.398 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:48.398 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:48.398 CC lib/ftl/mngt/ftl_mngt_p2l.o 
00:03:48.398 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:48.398 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:48.398 CC lib/ftl/utils/ftl_conf.o 00:03:48.398 CC lib/ftl/utils/ftl_md.o 00:03:48.398 CC lib/ftl/utils/ftl_bitmap.o 00:03:48.398 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:48.398 CC lib/ftl/utils/ftl_mempool.o 00:03:48.398 CC lib/ftl/utils/ftl_property.o 00:03:48.398 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:48.398 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:48.398 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:48.398 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:48.398 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:48.398 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:48.398 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:48.398 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:48.398 CC lib/ftl/base/ftl_base_bdev.o 00:03:48.398 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:48.398 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:48.398 CC lib/ftl/base/ftl_base_dev.o 00:03:48.398 CC lib/ftl/ftl_trace.o 00:03:48.964 LIB libspdk_nbd.a 00:03:48.964 SO libspdk_nbd.so.7.0 00:03:48.964 LIB libspdk_scsi.a 00:03:48.964 SYMLINK libspdk_nbd.so 00:03:48.964 SO libspdk_scsi.so.9.0 00:03:48.964 LIB libspdk_ublk.a 00:03:48.964 SYMLINK libspdk_scsi.so 00:03:48.964 SO libspdk_ublk.so.3.0 00:03:49.222 SYMLINK libspdk_ublk.so 00:03:49.222 LIB libspdk_ftl.a 00:03:49.222 CC lib/vhost/vhost.o 00:03:49.222 CC lib/vhost/vhost_scsi.o 00:03:49.222 CC lib/vhost/vhost_rpc.o 00:03:49.222 CC lib/iscsi/conn.o 00:03:49.222 CC lib/iscsi/init_grp.o 00:03:49.222 CC lib/vhost/vhost_blk.o 00:03:49.222 CC lib/iscsi/iscsi.o 00:03:49.222 CC lib/vhost/rte_vhost_user.o 00:03:49.222 CC lib/iscsi/md5.o 00:03:49.222 CC lib/iscsi/param.o 00:03:49.222 CC lib/iscsi/portal_grp.o 00:03:49.222 CC lib/iscsi/tgt_node.o 00:03:49.222 CC lib/iscsi/iscsi_subsystem.o 00:03:49.222 CC lib/iscsi/iscsi_rpc.o 00:03:49.222 CC lib/iscsi/task.o 00:03:49.481 SO libspdk_ftl.so.9.0 00:03:49.742 SYMLINK libspdk_ftl.so 00:03:50.001 LIB libspdk_nvmf.a 00:03:50.001 LIB 
libspdk_vhost.a 00:03:50.001 SO libspdk_nvmf.so.19.0 00:03:50.261 SO libspdk_vhost.so.8.0 00:03:50.261 SYMLINK libspdk_vhost.so 00:03:50.261 LIB libspdk_iscsi.a 00:03:50.261 SYMLINK libspdk_nvmf.so 00:03:50.261 SO libspdk_iscsi.so.8.0 00:03:50.520 SYMLINK libspdk_iscsi.so 00:03:50.780 CC module/vfu_device/vfu_virtio.o 00:03:50.780 CC module/vfu_device/vfu_virtio_blk.o 00:03:50.780 CC module/vfu_device/vfu_virtio_rpc.o 00:03:50.780 CC module/vfu_device/vfu_virtio_scsi.o 00:03:51.045 CC module/env_dpdk/env_dpdk_rpc.o 00:03:51.045 CC module/blob/bdev/blob_bdev.o 00:03:51.045 CC module/accel/error/accel_error.o 00:03:51.045 CC module/accel/error/accel_error_rpc.o 00:03:51.045 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:51.045 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:51.045 CC module/keyring/file/keyring_rpc.o 00:03:51.045 CC module/keyring/file/keyring.o 00:03:51.045 CC module/keyring/linux/keyring.o 00:03:51.045 CC module/keyring/linux/keyring_rpc.o 00:03:51.045 CC module/scheduler/gscheduler/gscheduler.o 00:03:51.045 CC module/accel/dsa/accel_dsa.o 00:03:51.045 CC module/accel/dsa/accel_dsa_rpc.o 00:03:51.045 CC module/accel/iaa/accel_iaa_rpc.o 00:03:51.045 CC module/accel/iaa/accel_iaa.o 00:03:51.045 CC module/accel/ioat/accel_ioat.o 00:03:51.045 LIB libspdk_env_dpdk_rpc.a 00:03:51.045 CC module/accel/ioat/accel_ioat_rpc.o 00:03:51.045 CC module/sock/posix/posix.o 00:03:51.045 SO libspdk_env_dpdk_rpc.so.6.0 00:03:51.045 SYMLINK libspdk_env_dpdk_rpc.so 00:03:51.304 LIB libspdk_keyring_linux.a 00:03:51.304 LIB libspdk_scheduler_gscheduler.a 00:03:51.304 LIB libspdk_keyring_file.a 00:03:51.304 LIB libspdk_accel_error.a 00:03:51.304 LIB libspdk_scheduler_dpdk_governor.a 00:03:51.304 SO libspdk_keyring_linux.so.1.0 00:03:51.304 SO libspdk_scheduler_gscheduler.so.4.0 00:03:51.304 LIB libspdk_accel_ioat.a 00:03:51.304 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:51.304 SO libspdk_accel_error.so.2.0 00:03:51.304 SO libspdk_keyring_file.so.1.0 
00:03:51.304 LIB libspdk_scheduler_dynamic.a 00:03:51.304 LIB libspdk_accel_iaa.a 00:03:51.304 SYMLINK libspdk_keyring_linux.so 00:03:51.304 SO libspdk_accel_ioat.so.6.0 00:03:51.304 SYMLINK libspdk_scheduler_gscheduler.so 00:03:51.304 SO libspdk_accel_iaa.so.3.0 00:03:51.304 SO libspdk_scheduler_dynamic.so.4.0 00:03:51.304 LIB libspdk_blob_bdev.a 00:03:51.304 SYMLINK libspdk_keyring_file.so 00:03:51.304 LIB libspdk_accel_dsa.a 00:03:51.304 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:51.304 SYMLINK libspdk_accel_error.so 00:03:51.304 SYMLINK libspdk_accel_ioat.so 00:03:51.304 SO libspdk_blob_bdev.so.11.0 00:03:51.304 SO libspdk_accel_dsa.so.5.0 00:03:51.304 SYMLINK libspdk_accel_iaa.so 00:03:51.304 SYMLINK libspdk_scheduler_dynamic.so 00:03:51.304 SYMLINK libspdk_blob_bdev.so 00:03:51.304 SYMLINK libspdk_accel_dsa.so 00:03:51.304 LIB libspdk_vfu_device.a 00:03:51.563 SO libspdk_vfu_device.so.3.0 00:03:51.563 SYMLINK libspdk_vfu_device.so 00:03:51.563 LIB libspdk_sock_posix.a 00:03:51.563 SO libspdk_sock_posix.so.6.0 00:03:51.820 SYMLINK libspdk_sock_posix.so 00:03:51.820 CC module/bdev/lvol/vbdev_lvol.o 00:03:51.820 CC module/bdev/error/vbdev_error.o 00:03:51.820 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:51.820 CC module/bdev/error/vbdev_error_rpc.o 00:03:51.820 CC module/blobfs/bdev/blobfs_bdev.o 00:03:51.820 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:51.820 CC module/bdev/delay/vbdev_delay.o 00:03:51.820 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:51.820 CC module/bdev/nvme/bdev_nvme.o 00:03:51.820 CC module/bdev/nvme/nvme_rpc.o 00:03:51.820 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:51.820 CC module/bdev/null/bdev_null.o 00:03:51.820 CC module/bdev/null/bdev_null_rpc.o 00:03:51.820 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:51.820 CC module/bdev/nvme/bdev_mdns_client.o 00:03:51.820 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:51.820 CC module/bdev/nvme/vbdev_opal.o 00:03:51.820 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:51.820 CC 
module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:51.820 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:51.820 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:51.820 CC module/bdev/aio/bdev_aio.o 00:03:51.820 CC module/bdev/aio/bdev_aio_rpc.o 00:03:51.820 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:51.820 CC module/bdev/gpt/gpt.o 00:03:51.820 CC module/bdev/malloc/bdev_malloc.o 00:03:51.820 CC module/bdev/gpt/vbdev_gpt.o 00:03:51.820 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:51.820 CC module/bdev/ftl/bdev_ftl.o 00:03:51.820 CC module/bdev/passthru/vbdev_passthru.o 00:03:51.820 CC module/bdev/raid/bdev_raid_rpc.o 00:03:51.820 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:51.820 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:51.820 CC module/bdev/raid/bdev_raid.o 00:03:51.820 CC module/bdev/raid/bdev_raid_sb.o 00:03:51.820 CC module/bdev/raid/raid0.o 00:03:51.820 CC module/bdev/raid/concat.o 00:03:51.820 CC module/bdev/raid/raid1.o 00:03:51.820 CC module/bdev/split/vbdev_split.o 00:03:51.820 CC module/bdev/split/vbdev_split_rpc.o 00:03:51.820 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:51.820 CC module/bdev/iscsi/bdev_iscsi.o 00:03:52.079 LIB libspdk_blobfs_bdev.a 00:03:52.079 SO libspdk_blobfs_bdev.so.6.0 00:03:52.079 LIB libspdk_bdev_error.a 00:03:52.079 LIB libspdk_bdev_split.a 00:03:52.079 SO libspdk_bdev_error.so.6.0 00:03:52.079 LIB libspdk_bdev_null.a 00:03:52.079 SYMLINK libspdk_blobfs_bdev.so 00:03:52.079 LIB libspdk_bdev_gpt.a 00:03:52.079 LIB libspdk_bdev_aio.a 00:03:52.079 LIB libspdk_bdev_passthru.a 00:03:52.079 SO libspdk_bdev_split.so.6.0 00:03:52.079 LIB libspdk_bdev_delay.a 00:03:52.079 SO libspdk_bdev_passthru.so.6.0 00:03:52.079 SO libspdk_bdev_aio.so.6.0 00:03:52.079 SO libspdk_bdev_null.so.6.0 00:03:52.079 LIB libspdk_bdev_ftl.a 00:03:52.079 SO libspdk_bdev_gpt.so.6.0 00:03:52.079 SYMLINK libspdk_bdev_error.so 00:03:52.079 LIB libspdk_bdev_malloc.a 00:03:52.079 LIB libspdk_bdev_zone_block.a 00:03:52.336 SO libspdk_bdev_delay.so.6.0 
00:03:52.336 SYMLINK libspdk_bdev_split.so 00:03:52.336 SO libspdk_bdev_ftl.so.6.0 00:03:52.336 SO libspdk_bdev_zone_block.so.6.0 00:03:52.336 SO libspdk_bdev_malloc.so.6.0 00:03:52.337 LIB libspdk_bdev_iscsi.a 00:03:52.337 SYMLINK libspdk_bdev_aio.so 00:03:52.337 SYMLINK libspdk_bdev_null.so 00:03:52.337 SYMLINK libspdk_bdev_passthru.so 00:03:52.337 SYMLINK libspdk_bdev_gpt.so 00:03:52.337 LIB libspdk_bdev_lvol.a 00:03:52.337 SYMLINK libspdk_bdev_delay.so 00:03:52.337 SYMLINK libspdk_bdev_ftl.so 00:03:52.337 SO libspdk_bdev_iscsi.so.6.0 00:03:52.337 SYMLINK libspdk_bdev_zone_block.so 00:03:52.337 SYMLINK libspdk_bdev_malloc.so 00:03:52.337 SO libspdk_bdev_lvol.so.6.0 00:03:52.337 LIB libspdk_bdev_virtio.a 00:03:52.337 SYMLINK libspdk_bdev_iscsi.so 00:03:52.337 SO libspdk_bdev_virtio.so.6.0 00:03:52.337 SYMLINK libspdk_bdev_lvol.so 00:03:52.337 SYMLINK libspdk_bdev_virtio.so 00:03:52.595 LIB libspdk_bdev_raid.a 00:03:52.595 SO libspdk_bdev_raid.so.6.0 00:03:52.853 SYMLINK libspdk_bdev_raid.so 00:03:53.419 LIB libspdk_bdev_nvme.a 00:03:53.419 SO libspdk_bdev_nvme.so.7.0 00:03:53.419 SYMLINK libspdk_bdev_nvme.so 00:03:54.356 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:54.356 CC module/event/subsystems/sock/sock.o 00:03:54.356 CC module/event/subsystems/vmd/vmd.o 00:03:54.356 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:54.356 CC module/event/subsystems/scheduler/scheduler.o 00:03:54.356 CC module/event/subsystems/keyring/keyring.o 00:03:54.356 CC module/event/subsystems/iobuf/iobuf.o 00:03:54.356 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:54.356 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:54.356 LIB libspdk_event_vhost_blk.a 00:03:54.356 SO libspdk_event_vhost_blk.so.3.0 00:03:54.356 LIB libspdk_event_vmd.a 00:03:54.356 LIB libspdk_event_sock.a 00:03:54.356 LIB libspdk_event_keyring.a 00:03:54.356 LIB libspdk_event_scheduler.a 00:03:54.356 LIB libspdk_event_vfu_tgt.a 00:03:54.356 SO libspdk_event_vmd.so.6.0 00:03:54.356 LIB 
libspdk_event_iobuf.a 00:03:54.356 SO libspdk_event_sock.so.5.0 00:03:54.356 SO libspdk_event_keyring.so.1.0 00:03:54.356 SYMLINK libspdk_event_vhost_blk.so 00:03:54.356 SO libspdk_event_scheduler.so.4.0 00:03:54.357 SO libspdk_event_vfu_tgt.so.3.0 00:03:54.357 SO libspdk_event_iobuf.so.3.0 00:03:54.357 SYMLINK libspdk_event_sock.so 00:03:54.357 SYMLINK libspdk_event_vmd.so 00:03:54.357 SYMLINK libspdk_event_keyring.so 00:03:54.357 SYMLINK libspdk_event_scheduler.so 00:03:54.357 SYMLINK libspdk_event_vfu_tgt.so 00:03:54.357 SYMLINK libspdk_event_iobuf.so 00:03:54.616 CC module/event/subsystems/accel/accel.o 00:03:54.874 LIB libspdk_event_accel.a 00:03:54.874 SO libspdk_event_accel.so.6.0 00:03:54.874 SYMLINK libspdk_event_accel.so 00:03:55.443 CC module/event/subsystems/bdev/bdev.o 00:03:55.443 LIB libspdk_event_bdev.a 00:03:55.443 SO libspdk_event_bdev.so.6.0 00:03:55.443 SYMLINK libspdk_event_bdev.so 00:03:55.703 CC module/event/subsystems/ublk/ublk.o 00:03:55.703 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:55.703 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:55.703 CC module/event/subsystems/nbd/nbd.o 00:03:55.703 CC module/event/subsystems/scsi/scsi.o 00:03:55.963 LIB libspdk_event_ublk.a 00:03:55.963 LIB libspdk_event_nbd.a 00:03:55.963 LIB libspdk_event_scsi.a 00:03:55.963 SO libspdk_event_ublk.so.3.0 00:03:55.963 SO libspdk_event_nbd.so.6.0 00:03:55.963 SO libspdk_event_scsi.so.6.0 00:03:55.963 LIB libspdk_event_nvmf.a 00:03:55.963 SYMLINK libspdk_event_ublk.so 00:03:55.963 SYMLINK libspdk_event_nbd.so 00:03:55.963 SO libspdk_event_nvmf.so.6.0 00:03:55.963 SYMLINK libspdk_event_scsi.so 00:03:56.223 SYMLINK libspdk_event_nvmf.so 00:03:56.223 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:56.484 CC module/event/subsystems/iscsi/iscsi.o 00:03:56.484 LIB libspdk_event_vhost_scsi.a 00:03:56.484 SO libspdk_event_vhost_scsi.so.3.0 00:03:56.484 LIB libspdk_event_iscsi.a 00:03:56.484 SO libspdk_event_iscsi.so.6.0 00:03:56.484 SYMLINK 
libspdk_event_vhost_scsi.so 00:03:56.484 SYMLINK libspdk_event_iscsi.so 00:03:56.744 SO libspdk.so.6.0 00:03:56.744 SYMLINK libspdk.so 00:03:57.003 CC app/spdk_lspci/spdk_lspci.o 00:03:57.003 CXX app/trace/trace.o 00:03:57.003 CC app/trace_record/trace_record.o 00:03:57.003 CC app/spdk_nvme_identify/identify.o 00:03:57.003 CC app/spdk_top/spdk_top.o 00:03:57.003 CC app/spdk_nvme_discover/discovery_aer.o 00:03:57.003 CC test/rpc_client/rpc_client_test.o 00:03:57.003 TEST_HEADER include/spdk/accel.h 00:03:57.003 TEST_HEADER include/spdk/accel_module.h 00:03:57.003 TEST_HEADER include/spdk/barrier.h 00:03:57.003 TEST_HEADER include/spdk/bdev.h 00:03:57.003 TEST_HEADER include/spdk/assert.h 00:03:57.003 CC app/spdk_nvme_perf/perf.o 00:03:57.003 TEST_HEADER include/spdk/base64.h 00:03:57.003 TEST_HEADER include/spdk/bdev_module.h 00:03:57.003 TEST_HEADER include/spdk/bdev_zone.h 00:03:57.003 TEST_HEADER include/spdk/bit_array.h 00:03:57.003 TEST_HEADER include/spdk/bit_pool.h 00:03:57.003 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:57.003 TEST_HEADER include/spdk/blob_bdev.h 00:03:57.003 TEST_HEADER include/spdk/blobfs.h 00:03:57.003 TEST_HEADER include/spdk/blob.h 00:03:57.003 TEST_HEADER include/spdk/conf.h 00:03:57.003 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:57.003 TEST_HEADER include/spdk/config.h 00:03:57.003 TEST_HEADER include/spdk/cpuset.h 00:03:57.003 TEST_HEADER include/spdk/crc16.h 00:03:57.003 TEST_HEADER include/spdk/crc32.h 00:03:57.003 TEST_HEADER include/spdk/crc64.h 00:03:57.003 TEST_HEADER include/spdk/dma.h 00:03:57.003 TEST_HEADER include/spdk/endian.h 00:03:57.003 TEST_HEADER include/spdk/dif.h 00:03:57.003 TEST_HEADER include/spdk/env_dpdk.h 00:03:57.003 TEST_HEADER include/spdk/event.h 00:03:57.003 TEST_HEADER include/spdk/env.h 00:03:57.003 TEST_HEADER include/spdk/fd_group.h 00:03:57.003 TEST_HEADER include/spdk/fd.h 00:03:57.003 TEST_HEADER include/spdk/file.h 00:03:57.003 TEST_HEADER include/spdk/ftl.h 00:03:57.003 TEST_HEADER 
include/spdk/histogram_data.h 00:03:57.003 TEST_HEADER include/spdk/hexlify.h 00:03:57.003 CC app/iscsi_tgt/iscsi_tgt.o 00:03:57.003 TEST_HEADER include/spdk/gpt_spec.h 00:03:57.003 TEST_HEADER include/spdk/idxd.h 00:03:57.003 TEST_HEADER include/spdk/init.h 00:03:57.003 CC app/spdk_dd/spdk_dd.o 00:03:57.003 CC app/nvmf_tgt/nvmf_main.o 00:03:57.003 TEST_HEADER include/spdk/ioat.h 00:03:57.003 TEST_HEADER include/spdk/idxd_spec.h 00:03:57.003 TEST_HEADER include/spdk/ioat_spec.h 00:03:57.003 TEST_HEADER include/spdk/iscsi_spec.h 00:03:57.003 TEST_HEADER include/spdk/json.h 00:03:57.003 TEST_HEADER include/spdk/jsonrpc.h 00:03:57.003 TEST_HEADER include/spdk/keyring.h 00:03:57.003 TEST_HEADER include/spdk/keyring_module.h 00:03:57.003 TEST_HEADER include/spdk/log.h 00:03:57.003 TEST_HEADER include/spdk/likely.h 00:03:57.003 TEST_HEADER include/spdk/lvol.h 00:03:57.003 TEST_HEADER include/spdk/mmio.h 00:03:57.003 TEST_HEADER include/spdk/nbd.h 00:03:57.003 TEST_HEADER include/spdk/notify.h 00:03:57.003 TEST_HEADER include/spdk/memory.h 00:03:57.003 TEST_HEADER include/spdk/nvme.h 00:03:57.003 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:57.003 TEST_HEADER include/spdk/nvme_intel.h 00:03:57.003 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:57.003 TEST_HEADER include/spdk/nvme_spec.h 00:03:57.003 TEST_HEADER include/spdk/nvme_zns.h 00:03:57.003 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:57.003 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:57.003 TEST_HEADER include/spdk/nvmf_spec.h 00:03:57.003 TEST_HEADER include/spdk/nvmf_transport.h 00:03:57.003 TEST_HEADER include/spdk/nvmf.h 00:03:57.276 TEST_HEADER include/spdk/opal.h 00:03:57.276 TEST_HEADER include/spdk/opal_spec.h 00:03:57.276 TEST_HEADER include/spdk/pci_ids.h 00:03:57.276 TEST_HEADER include/spdk/queue.h 00:03:57.277 TEST_HEADER include/spdk/pipe.h 00:03:57.277 TEST_HEADER include/spdk/scheduler.h 00:03:57.277 TEST_HEADER include/spdk/rpc.h 00:03:57.277 TEST_HEADER include/spdk/reduce.h 00:03:57.277 
TEST_HEADER include/spdk/scsi_spec.h 00:03:57.277 TEST_HEADER include/spdk/scsi.h 00:03:57.277 TEST_HEADER include/spdk/sock.h 00:03:57.277 TEST_HEADER include/spdk/stdinc.h 00:03:57.277 TEST_HEADER include/spdk/string.h 00:03:57.277 TEST_HEADER include/spdk/thread.h 00:03:57.277 TEST_HEADER include/spdk/trace.h 00:03:57.277 TEST_HEADER include/spdk/tree.h 00:03:57.277 TEST_HEADER include/spdk/trace_parser.h 00:03:57.277 TEST_HEADER include/spdk/ublk.h 00:03:57.277 TEST_HEADER include/spdk/uuid.h 00:03:57.277 TEST_HEADER include/spdk/util.h 00:03:57.277 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:57.277 TEST_HEADER include/spdk/version.h 00:03:57.277 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:57.277 TEST_HEADER include/spdk/vhost.h 00:03:57.277 TEST_HEADER include/spdk/xor.h 00:03:57.277 TEST_HEADER include/spdk/zipf.h 00:03:57.277 TEST_HEADER include/spdk/vmd.h 00:03:57.277 CXX test/cpp_headers/accel.o 00:03:57.277 CXX test/cpp_headers/accel_module.o 00:03:57.277 CXX test/cpp_headers/assert.o 00:03:57.277 CXX test/cpp_headers/barrier.o 00:03:57.277 CXX test/cpp_headers/base64.o 00:03:57.277 CXX test/cpp_headers/bdev.o 00:03:57.277 CXX test/cpp_headers/bdev_module.o 00:03:57.277 CXX test/cpp_headers/bdev_zone.o 00:03:57.277 CXX test/cpp_headers/bit_array.o 00:03:57.277 CXX test/cpp_headers/blobfs_bdev.o 00:03:57.277 CXX test/cpp_headers/blob_bdev.o 00:03:57.277 CXX test/cpp_headers/blobfs.o 00:03:57.277 CXX test/cpp_headers/bit_pool.o 00:03:57.277 CXX test/cpp_headers/blob.o 00:03:57.277 CXX test/cpp_headers/config.o 00:03:57.277 CXX test/cpp_headers/conf.o 00:03:57.277 CXX test/cpp_headers/cpuset.o 00:03:57.277 CXX test/cpp_headers/crc16.o 00:03:57.277 CC app/spdk_tgt/spdk_tgt.o 00:03:57.277 CXX test/cpp_headers/crc64.o 00:03:57.277 CXX test/cpp_headers/dif.o 00:03:57.277 CXX test/cpp_headers/endian.o 00:03:57.277 CXX test/cpp_headers/dma.o 00:03:57.277 CXX test/cpp_headers/env_dpdk.o 00:03:57.277 CXX test/cpp_headers/event.o 00:03:57.277 CXX 
test/cpp_headers/crc32.o 00:03:57.277 CXX test/cpp_headers/env.o 00:03:57.277 CXX test/cpp_headers/fd_group.o 00:03:57.277 CXX test/cpp_headers/fd.o 00:03:57.277 CXX test/cpp_headers/file.o 00:03:57.277 CXX test/cpp_headers/hexlify.o 00:03:57.277 CXX test/cpp_headers/gpt_spec.o 00:03:57.277 CXX test/cpp_headers/ftl.o 00:03:57.277 CXX test/cpp_headers/histogram_data.o 00:03:57.277 CXX test/cpp_headers/idxd.o 00:03:57.277 CXX test/cpp_headers/idxd_spec.o 00:03:57.277 CXX test/cpp_headers/init.o 00:03:57.277 CXX test/cpp_headers/ioat.o 00:03:57.277 CXX test/cpp_headers/ioat_spec.o 00:03:57.277 CXX test/cpp_headers/json.o 00:03:57.277 CXX test/cpp_headers/iscsi_spec.o 00:03:57.277 CXX test/cpp_headers/jsonrpc.o 00:03:57.277 CXX test/cpp_headers/likely.o 00:03:57.277 CXX test/cpp_headers/keyring.o 00:03:57.277 CXX test/cpp_headers/keyring_module.o 00:03:57.277 CXX test/cpp_headers/log.o 00:03:57.277 CXX test/cpp_headers/lvol.o 00:03:57.277 CXX test/cpp_headers/memory.o 00:03:57.277 CC examples/ioat/perf/perf.o 00:03:57.277 CXX test/cpp_headers/nbd.o 00:03:57.277 CXX test/cpp_headers/notify.o 00:03:57.277 CXX test/cpp_headers/mmio.o 00:03:57.277 CXX test/cpp_headers/nvme_intel.o 00:03:57.277 CXX test/cpp_headers/nvme.o 00:03:57.277 CXX test/cpp_headers/nvme_ocssd.o 00:03:57.277 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:57.277 CXX test/cpp_headers/nvme_zns.o 00:03:57.277 CXX test/cpp_headers/nvme_spec.o 00:03:57.277 CXX test/cpp_headers/nvmf_cmd.o 00:03:57.277 CC examples/ioat/verify/verify.o 00:03:57.277 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:57.277 CXX test/cpp_headers/nvmf.o 00:03:57.277 CXX test/cpp_headers/nvmf_spec.o 00:03:57.277 CXX test/cpp_headers/nvmf_transport.o 00:03:57.277 CXX test/cpp_headers/opal_spec.o 00:03:57.277 CXX test/cpp_headers/opal.o 00:03:57.277 CXX test/cpp_headers/pci_ids.o 00:03:57.277 CXX test/cpp_headers/pipe.o 00:03:57.277 CXX test/cpp_headers/queue.o 00:03:57.277 CXX test/cpp_headers/reduce.o 00:03:57.277 CC 
test/app/jsoncat/jsoncat.o 00:03:57.277 CC app/fio/nvme/fio_plugin.o 00:03:57.277 CC test/env/vtophys/vtophys.o 00:03:57.277 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:57.277 CC test/thread/poller_perf/poller_perf.o 00:03:57.277 CC test/app/stub/stub.o 00:03:57.277 CC test/app/histogram_perf/histogram_perf.o 00:03:57.277 CC examples/util/zipf/zipf.o 00:03:57.277 CC test/env/memory/memory_ut.o 00:03:57.277 CC test/app/bdev_svc/bdev_svc.o 00:03:57.277 CC test/env/pci/pci_ut.o 00:03:57.277 CXX test/cpp_headers/rpc.o 00:03:57.277 CC app/fio/bdev/fio_plugin.o 00:03:57.546 CC test/dma/test_dma/test_dma.o 00:03:57.546 LINK spdk_lspci 00:03:57.546 LINK interrupt_tgt 00:03:57.546 LINK nvmf_tgt 00:03:57.546 LINK spdk_nvme_discover 00:03:57.546 LINK spdk_trace_record 00:03:57.810 CC test/env/mem_callbacks/mem_callbacks.o 00:03:57.810 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:57.810 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:57.810 LINK rpc_client_test 00:03:57.810 LINK env_dpdk_post_init 00:03:57.810 CXX test/cpp_headers/scheduler.o 00:03:57.810 CXX test/cpp_headers/scsi.o 00:03:57.810 LINK vtophys 00:03:57.810 CXX test/cpp_headers/scsi_spec.o 00:03:57.810 CXX test/cpp_headers/sock.o 00:03:57.810 CXX test/cpp_headers/stdinc.o 00:03:57.810 CXX test/cpp_headers/string.o 00:03:57.810 CXX test/cpp_headers/thread.o 00:03:57.810 CXX test/cpp_headers/trace_parser.o 00:03:57.810 CXX test/cpp_headers/trace.o 00:03:57.810 CXX test/cpp_headers/tree.o 00:03:57.810 CXX test/cpp_headers/ublk.o 00:03:57.810 CXX test/cpp_headers/util.o 00:03:57.810 CXX test/cpp_headers/uuid.o 00:03:57.810 CXX test/cpp_headers/version.o 00:03:57.810 CXX test/cpp_headers/vfio_user_pci.o 00:03:57.810 CXX test/cpp_headers/vfio_user_spec.o 00:03:57.810 CXX test/cpp_headers/vhost.o 00:03:57.810 CXX test/cpp_headers/vmd.o 00:03:57.810 CXX test/cpp_headers/xor.o 00:03:57.810 CXX test/cpp_headers/zipf.o 00:03:57.810 LINK iscsi_tgt 00:03:57.810 LINK jsoncat 00:03:57.810 LINK poller_perf 
00:03:57.810 LINK histogram_perf 00:03:57.810 LINK spdk_trace 00:03:57.810 LINK zipf 00:03:57.810 LINK ioat_perf 00:03:57.810 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:57.810 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:57.810 LINK verify 00:03:57.810 LINK stub 00:03:57.810 LINK spdk_tgt 00:03:57.810 LINK bdev_svc 00:03:58.068 LINK spdk_dd 00:03:58.068 LINK pci_ut 00:03:58.068 LINK spdk_nvme 00:03:58.326 LINK spdk_bdev 00:03:58.326 LINK spdk_nvme_perf 00:03:58.326 LINK nvme_fuzz 00:03:58.326 LINK test_dma 00:03:58.326 LINK spdk_top 00:03:58.326 CC app/vhost/vhost.o 00:03:58.326 LINK vhost_fuzz 00:03:58.326 CC examples/vmd/led/led.o 00:03:58.326 CC test/event/event_perf/event_perf.o 00:03:58.326 CC test/event/reactor/reactor.o 00:03:58.326 CC examples/idxd/perf/perf.o 00:03:58.326 CC examples/vmd/lsvmd/lsvmd.o 00:03:58.326 CC test/event/reactor_perf/reactor_perf.o 00:03:58.326 CC examples/sock/hello_world/hello_sock.o 00:03:58.326 CC examples/thread/thread/thread_ex.o 00:03:58.326 CC test/event/app_repeat/app_repeat.o 00:03:58.326 LINK spdk_nvme_identify 00:03:58.326 CC test/event/scheduler/scheduler.o 00:03:58.326 LINK mem_callbacks 00:03:58.584 LINK vhost 00:03:58.584 LINK lsvmd 00:03:58.584 LINK led 00:03:58.584 LINK event_perf 00:03:58.584 LINK reactor 00:03:58.584 LINK reactor_perf 00:03:58.584 LINK app_repeat 00:03:58.584 LINK hello_sock 00:03:58.584 LINK scheduler 00:03:58.584 LINK thread 00:03:58.584 LINK idxd_perf 00:03:58.584 LINK memory_ut 00:03:58.584 CC test/nvme/startup/startup.o 00:03:58.584 CC test/nvme/aer/aer.o 00:03:58.584 CC test/nvme/connect_stress/connect_stress.o 00:03:58.584 CC test/nvme/simple_copy/simple_copy.o 00:03:58.584 CC test/nvme/overhead/overhead.o 00:03:58.584 CC test/nvme/reserve/reserve.o 00:03:58.584 CC test/nvme/err_injection/err_injection.o 00:03:58.584 CC test/nvme/reset/reset.o 00:03:58.584 CC test/nvme/sgl/sgl.o 00:03:58.584 CC test/nvme/fused_ordering/fused_ordering.o 00:03:58.584 CC test/nvme/cuse/cuse.o 
00:03:58.584 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:58.584 CC test/nvme/boot_partition/boot_partition.o 00:03:58.584 CC test/nvme/compliance/nvme_compliance.o 00:03:58.841 CC test/nvme/fdp/fdp.o 00:03:58.841 CC test/nvme/e2edp/nvme_dp.o 00:03:58.841 CC test/accel/dif/dif.o 00:03:58.841 CC test/blobfs/mkfs/mkfs.o 00:03:58.841 CC test/lvol/esnap/esnap.o 00:03:58.841 LINK startup 00:03:58.841 LINK boot_partition 00:03:58.841 LINK err_injection 00:03:58.841 LINK fused_ordering 00:03:58.841 LINK doorbell_aers 00:03:58.841 LINK connect_stress 00:03:58.841 LINK reserve 00:03:58.841 LINK mkfs 00:03:58.841 LINK simple_copy 00:03:58.841 LINK reset 00:03:58.841 LINK sgl 00:03:58.841 LINK aer 00:03:58.841 LINK overhead 00:03:59.099 LINK nvme_dp 00:03:59.099 LINK nvme_compliance 00:03:59.099 LINK fdp 00:03:59.099 CC examples/nvme/hotplug/hotplug.o 00:03:59.099 CC examples/nvme/arbitration/arbitration.o 00:03:59.099 CC examples/nvme/hello_world/hello_world.o 00:03:59.099 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:59.099 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:59.099 CC examples/nvme/abort/abort.o 00:03:59.099 CC examples/nvme/reconnect/reconnect.o 00:03:59.099 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:59.099 CC examples/accel/perf/accel_perf.o 00:03:59.099 LINK dif 00:03:59.099 CC examples/blob/cli/blobcli.o 00:03:59.099 LINK iscsi_fuzz 00:03:59.099 CC examples/blob/hello_world/hello_blob.o 00:03:59.099 LINK pmr_persistence 00:03:59.099 LINK hello_world 00:03:59.435 LINK cmb_copy 00:03:59.435 LINK hotplug 00:03:59.435 LINK arbitration 00:03:59.435 LINK reconnect 00:03:59.435 LINK abort 00:03:59.435 LINK hello_blob 00:03:59.435 LINK nvme_manage 00:03:59.435 LINK accel_perf 00:03:59.435 LINK blobcli 00:03:59.695 CC test/bdev/bdevio/bdevio.o 00:03:59.695 LINK cuse 00:03:59.954 CC examples/bdev/bdevperf/bdevperf.o 00:03:59.954 CC examples/bdev/hello_world/hello_bdev.o 00:03:59.954 LINK bdevio 00:04:00.213 LINK hello_bdev 00:04:00.472 LINK bdevperf 
00:04:01.041 CC examples/nvmf/nvmf/nvmf.o 00:04:01.041 LINK nvmf 00:04:02.422 LINK esnap 00:04:02.422 00:04:02.422 real 0m44.308s 00:04:02.422 user 6m28.531s 00:04:02.422 sys 3m26.166s 00:04:02.422 01:04:24 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:04:02.422 01:04:24 make -- common/autotest_common.sh@10 -- $ set +x 00:04:02.422 ************************************ 00:04:02.422 END TEST make 00:04:02.422 ************************************ 00:04:02.422 01:04:24 -- common/autotest_common.sh@1142 -- $ return 0 00:04:02.422 01:04:24 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:02.422 01:04:24 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:02.422 01:04:24 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:02.422 01:04:24 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:02.422 01:04:24 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:04:02.422 01:04:24 -- pm/common@44 -- $ pid=611401 00:04:02.422 01:04:24 -- pm/common@50 -- $ kill -TERM 611401 00:04:02.422 01:04:24 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:02.422 01:04:24 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:04:02.422 01:04:24 -- pm/common@44 -- $ pid=611403 00:04:02.422 01:04:24 -- pm/common@50 -- $ kill -TERM 611403 00:04:02.422 01:04:24 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:02.422 01:04:24 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:04:02.422 01:04:24 -- pm/common@44 -- $ pid=611404 00:04:02.422 01:04:24 -- pm/common@50 -- $ kill -TERM 611404 00:04:02.422 01:04:24 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:02.422 01:04:24 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 
00:04:02.422 01:04:24 -- pm/common@44 -- $ pid=611431 00:04:02.422 01:04:24 -- pm/common@50 -- $ sudo -E kill -TERM 611431 00:04:02.682 01:04:25 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:02.682 01:04:25 -- nvmf/common.sh@7 -- # uname -s 00:04:02.682 01:04:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:02.682 01:04:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:02.682 01:04:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:02.682 01:04:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:02.682 01:04:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:02.682 01:04:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:02.682 01:04:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:02.682 01:04:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:02.682 01:04:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:02.682 01:04:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:02.682 01:04:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:02.682 01:04:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:02.682 01:04:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:02.682 01:04:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:02.682 01:04:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:02.682 01:04:25 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:02.682 01:04:25 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:02.682 01:04:25 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:02.682 01:04:25 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:02.682 01:04:25 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:02.682 01:04:25 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:02.682 01:04:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:02.682 01:04:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:02.682 01:04:25 -- paths/export.sh@5 -- # export PATH 00:04:02.682 01:04:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:02.682 01:04:25 -- nvmf/common.sh@47 -- # : 0 00:04:02.682 01:04:25 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:02.682 01:04:25 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:02.682 01:04:25 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:02.682 01:04:25 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:02.682 01:04:25 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:02.682 01:04:25 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:02.682 01:04:25 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:02.682 01:04:25 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:02.682 01:04:25 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:02.682 01:04:25 -- 
spdk/autotest.sh@32 -- # uname -s 00:04:02.682 01:04:25 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:02.682 01:04:25 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:02.682 01:04:25 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:02.682 01:04:25 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:04:02.682 01:04:25 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:02.682 01:04:25 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:02.682 01:04:25 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:02.682 01:04:25 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:02.682 01:04:25 -- spdk/autotest.sh@48 -- # udevadm_pid=670411 00:04:02.682 01:04:25 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:02.682 01:04:25 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:02.682 01:04:25 -- pm/common@17 -- # local monitor 00:04:02.682 01:04:25 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:02.682 01:04:25 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:02.682 01:04:25 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:02.682 01:04:25 -- pm/common@21 -- # date +%s 00:04:02.682 01:04:25 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:02.682 01:04:25 -- pm/common@21 -- # date +%s 00:04:02.682 01:04:25 -- pm/common@25 -- # sleep 1 00:04:02.682 01:04:25 -- pm/common@21 -- # date +%s 00:04:02.682 01:04:25 -- pm/common@21 -- # date +%s 00:04:02.682 01:04:25 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721862265 00:04:02.682 01:04:25 -- pm/common@21 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721862265 00:04:02.682 01:04:25 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721862265 00:04:02.682 01:04:25 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721862265 00:04:02.682 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721862265_collect-vmstat.pm.log 00:04:02.682 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721862265_collect-cpu-load.pm.log 00:04:02.682 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721862265_collect-cpu-temp.pm.log 00:04:02.682 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721862265_collect-bmc-pm.bmc.pm.log 00:04:03.621 01:04:26 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:03.621 01:04:26 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:03.621 01:04:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:03.621 01:04:26 -- common/autotest_common.sh@10 -- # set +x 00:04:03.621 01:04:26 -- spdk/autotest.sh@59 -- # create_test_list 00:04:03.621 01:04:26 -- common/autotest_common.sh@746 -- # xtrace_disable 00:04:03.621 01:04:26 -- common/autotest_common.sh@10 -- # set +x 00:04:03.881 01:04:26 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:04:03.881 01:04:26 -- spdk/autotest.sh@61 -- # readlink -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:03.881 01:04:26 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:03.881 01:04:26 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:04:03.881 01:04:26 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:03.881 01:04:26 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:03.881 01:04:26 -- common/autotest_common.sh@1455 -- # uname 00:04:03.881 01:04:26 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:03.881 01:04:26 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:03.881 01:04:26 -- common/autotest_common.sh@1475 -- # uname 00:04:03.881 01:04:26 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:03.881 01:04:26 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:04:03.881 01:04:26 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:04:03.881 01:04:26 -- spdk/autotest.sh@72 -- # hash lcov 00:04:03.881 01:04:26 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:03.881 01:04:26 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:04:03.881 --rc lcov_branch_coverage=1 00:04:03.881 --rc lcov_function_coverage=1 00:04:03.881 --rc genhtml_branch_coverage=1 00:04:03.881 --rc genhtml_function_coverage=1 00:04:03.881 --rc genhtml_legend=1 00:04:03.881 --rc geninfo_all_blocks=1 00:04:03.881 ' 00:04:03.881 01:04:26 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:04:03.881 --rc lcov_branch_coverage=1 00:04:03.881 --rc lcov_function_coverage=1 00:04:03.881 --rc genhtml_branch_coverage=1 00:04:03.881 --rc genhtml_function_coverage=1 00:04:03.881 --rc genhtml_legend=1 00:04:03.881 --rc geninfo_all_blocks=1 00:04:03.881 ' 00:04:03.881 01:04:26 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:04:03.881 --rc lcov_branch_coverage=1 00:04:03.881 --rc lcov_function_coverage=1 00:04:03.881 --rc genhtml_branch_coverage=1 00:04:03.881 --rc 
genhtml_function_coverage=1 00:04:03.881 --rc genhtml_legend=1 00:04:03.881 --rc geninfo_all_blocks=1 00:04:03.881 --no-external' 00:04:03.881 01:04:26 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:04:03.881 --rc lcov_branch_coverage=1 00:04:03.881 --rc lcov_function_coverage=1 00:04:03.881 --rc genhtml_branch_coverage=1 00:04:03.881 --rc genhtml_function_coverage=1 00:04:03.881 --rc genhtml_legend=1 00:04:03.881 --rc geninfo_all_blocks=1 00:04:03.881 --no-external' 00:04:03.881 01:04:26 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:03.881 lcov: LCOV version 1.14 00:04:03.881 01:04:26 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:05.259 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:05.259 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:04:05.259 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:05.259 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:04:05.259 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:05.259 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:04:05.259 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:05.259 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:04:05.259 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:05.259 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:04:05.259 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:05.259 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:04:05.259 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:05.259 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:05.259 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:05.259 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:04:05.259 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:05.259 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:04:05.259 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:05.259 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:04:05.259 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:05.259 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:04:05.259 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:05.259 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:04:05.259 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:04:05.259 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:04:05.259 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:05.259 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:04:05.260 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:05.260 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:04:05.260 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:05.260 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:04:05.260 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:05.260 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:04:05.260 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:05.260 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:04:05.260 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:05.260 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:04:05.260 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:05.260 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:04:05.260 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:05.260 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:04:05.260 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:05.260 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:04:05.260 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:05.260 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:04:05.260 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:05.260 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:04:05.260 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:04:05.260 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:04:05.260 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:05.260 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:04:05.260 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no 
00:04:05.260 - 00:04:05.780 geninfo: WARNING: GCOV did not produce any data ("no functions found") for the following .gcno files under /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/:
    event, env, gpt_spec, fd, histogram_data, idxd_spec, ftl, crc32, ioat, likely, ioat_spec, init, json, keyring, keyring_module, idxd, lvol, jsonrpc, nbd, notify, iscsi_spec, mmio, memory, nvme_ocssd, nvme_ocssd_spec, log, nvme, nvme_intel, nvmf, opal, nvme_zns, nvmf_transport, nvme_spec, nvmf_spec, nvmf_cmd, pipe, nvmf_fc_spec, opal_spec, pci_ids, queue, reduce, rpc, scheduler, scsi, scsi_spec, sock, stdinc, string, thread, tree, trace_parser, trace, ublk, util, uuid, vfio_user_pci, vfio_user_spec, version, vmd, vhost, xor, zipf
00:04:17.997 geninfo: WARNING: GCOV did not produce any data ("no functions found") for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno
00:04:27.981 01:04:50 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup
00:04:27.981 01:04:50 -- common/autotest_common.sh@722 -- # xtrace_disable
00:04:27.981 01:04:50 -- common/autotest_common.sh@10 -- # set +x
00:04:27.981 01:04:50 -- spdk/autotest.sh@91 -- # rm -f
00:04:27.981 01:04:50 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:04:30.520 0000:5e:00.0 (8086 0a54): Already using the nvme driver
00:04:30.520 0000:00:04.7 through 0000:00:04.0 (8086 2021): Already using the ioatdma driver (8 channels)
00:04:30.520 0000:80:04.7 through 0000:80:04.0 (8086 2021): Already using the ioatdma driver (8 channels)
00:04:30.520 01:04:52 -- spdk/autotest.sh@96 -- # get_zoned_devs
00:04:30.520 01:04:52 -- common/autotest_common.sh@1669 -- # zoned_devs=()
00:04:30.520 01:04:52 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs
00:04:30.520 01:04:52 -- common/autotest_common.sh@1670 -- # local nvme bdf
00:04:30.520 01:04:52 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:04:30.520 01:04:52 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1
00:04:30.520 01:04:52 -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:04:30.520 01:04:52 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:04:30.520 01:04:52 -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:04:30.520 01:04:53 -- spdk/autotest.sh@98 -- # (( 0 > 0 ))
00:04:30.520 01:04:53 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*)
00:04:30.520 01:04:53 -- spdk/autotest.sh@112 -- # [[ -z '' ]]
00:04:30.520 01:04:53 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1
00:04:30.520 01:04:53 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt
00:04:30.520 01:04:53 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:04:30.780 No valid GPT data, bailing
00:04:30.780 01:04:53 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:04:30.780 01:04:53 -- scripts/common.sh@391 -- # pt=
00:04:30.780 01:04:53 -- scripts/common.sh@392 -- # return 1
00:04:30.780 01:04:53 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:04:30.780 1+0 records in
00:04:30.780 1+0 records out
00:04:30.780 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00608492 s, 172 MB/s
00:04:30.780 01:04:53 -- spdk/autotest.sh@118 -- # sync
00:04:30.780 01:04:53 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes
00:04:30.780 01:04:53 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:04:30.780 01:04:53 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:04:36.182 01:04:58 -- spdk/autotest.sh@124 -- # uname -s
00:04:36.182 01:04:58 --
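The pre-cleanup pass traced above calls `block_in_use`: spdk-gpt.py finds no valid GPT data, the `blkid -s PTTYPE` fallback leaves `pt` empty, so the device is treated as free and its first MiB is zeroed with `dd`. A minimal sketch of that blkid fallback (the helper name `block_is_free` is illustrative, not the real function name; the real logic lives in scripts/common.sh):

```shell
# Sketch of the blkid fallback behind block_in_use (illustrative helper
# name; assumption labeled in the lead-in above).
# blkid exits non-zero when no partition table is recognized, leaving pt
# empty, which the caller reads as "device is free".
block_is_free() {
    local block=$1 pt
    pt=$(blkid -s PTTYPE -o value "$block" 2>/dev/null) || pt=
    [ -z "$pt" ]   # empty PTTYPE -> no partition table -> safe to wipe
}
```

In the log above, /dev/nvme0n1 passed this check, so `dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1` wiped the first MiB before the setup tests ran.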
spdk/autotest.sh@124 -- # '[' Linux = Linux ']'
00:04:36.182 01:04:58 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:04:36.182 01:04:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:36.182 01:04:58 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:36.182 01:04:58 -- common/autotest_common.sh@10 -- # set +x
00:04:36.182 ************************************
00:04:36.182 START TEST setup.sh
00:04:36.182 ************************************
00:04:36.182 01:04:58 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:04:36.182 * Looking for test storage...
00:04:36.182 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:04:36.182 01:04:58 setup.sh -- setup/test-setup.sh@10 -- # uname -s
00:04:36.182 01:04:58 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]]
00:04:36.182 01:04:58 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh
00:04:36.182 01:04:58 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:36.182 01:04:58 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:36.182 01:04:58 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:04:36.182 ************************************
00:04:36.182 START TEST acl
00:04:36.182 ************************************
00:04:36.182 01:04:58 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh
00:04:36.182 * Looking for test storage...
00:04:36.182 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:04:36.182 01:04:58 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs
00:04:36.182 01:04:58 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=()
00:04:36.182 01:04:58 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs
00:04:36.182 01:04:58 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf
00:04:36.182 01:04:58 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:04:36.182 01:04:58 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1
00:04:36.182 01:04:58 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:04:36.182 01:04:58 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:04:36.182 01:04:58 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:04:36.182 01:04:58 setup.sh.acl -- setup/acl.sh@12 -- # devs=()
00:04:36.182 01:04:58 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs
00:04:36.182 01:04:58 setup.sh.acl -- setup/acl.sh@13 -- # drivers=()
00:04:36.182 01:04:58 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers
00:04:36.182 01:04:58 setup.sh.acl -- setup/acl.sh@51 -- # setup reset
00:04:36.182 01:04:58 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:36.182 01:04:58 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:04:39.479 01:05:01 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs
00:04:39.479 01:05:01 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver
00:04:39.479 01:05:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:39.479 01:05:01 setup.sh.acl -- setup/acl.sh@15 -- # setup output status
00:04:39.479 01:05:01 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]]
00:04:39.479 01:05:01 setup.sh.acl -- setup/common.sh@10 --
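The `is_block_zoned nvme0n1` check traced above reads the device's sysfs `queue/zoned` attribute; `none` means a conventional (non-zoned) drive, so `zoned_devs` stays empty. A sketch of that check; the real helper in autotest_common.sh takes only the device name and reads /sys/block directly, while the sysfs root is parameterized here (an assumption, for testability):

```shell
# Sketch of the is_block_zoned test seen in the trace above.
# The second argument is an assumption added for testing; the real SPDK
# helper hardcodes /sys/block.
is_block_zoned() {
    local device=$1 sysfs=${2:-/sys/block}
    [ -e "$sysfs/$device/queue/zoned" ] || return 1
    # "none" means a conventional block device; anything else
    # (host-aware, host-managed) counts as zoned.
    [ "$(cat "$sysfs/$device/queue/zoned")" != none ]
}
```

In the trace, nvme0n1 reports `none`, so `[[ none != none ]]` fails and the later `(( 0 > 0 ))` confirms no zoned devices were collected.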
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:04:42.021 Hugepages
00:04:42.021 node hugesize free / total
00:04:42.021 01:05:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]]
00:04:42.021 01:05:04 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:04:42.021 01:05:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:42.021 01:05:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]]
00:04:42.021 01:05:04 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:04:42.021 01:05:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:42.021 01:05:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]]
00:04:42.021 01:05:04 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:04:42.021 01:05:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:42.021
00:04:42.021 Type BDF Vendor Device NUMA Driver Device Block devices
00:04:42.021 01:05:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]]
00:04:42.021 01:05:04 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:04:42.021 01:05:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:42.021 01:05:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]]
00:04:42.021 01:05:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:04:42.021 01:05:04 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:04:42.021 01:05:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
[the same acl.sh@19/@20/continue/read sequence repeats for 0000:00:04.1 through 0000:00:04.7, all bound to ioatdma and skipped]
00:04:42.021 01:05:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:5e:00.0 == *:*:*.* ]]
00:04:42.021 01:05:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]]
00:04:42.021 01:05:04 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]]
00:04:42.021 01:05:04 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev")
00:04:42.021 01:05:04 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme
00:04:42.021 01:05:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
[the same acl.sh@19/@20/continue/read sequence repeats for 0000:80:04.0 through 0000:80:04.7, all bound to ioatdma and skipped]
00:04:42.021 01:05:04 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 ))
00:04:42.021 01:05:04 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied
00:04:42.021 01:05:04 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:42.021 01:05:04 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:42.021 01:05:04 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:04:42.021 ************************************
00:04:42.021 START TEST denied
00:04:42.021 ************************************
00:04:42.021 01:05:04 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied
00:04:42.021 01:05:04 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:5e:00.0'
00:04:42.021 01:05:04 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config
00:04:42.022 01:05:04 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5e:00.0'
00:04:42.022 01:05:04 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]]
00:04:42.022 01:05:04 setup.sh.acl.denied -- setup/common.sh@10 --
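The scan above shows how collect_setup_devs consumes the `setup.sh status` table: `read -r _ dev _ _ _ driver _` splits each row, non-BDF rows (the hugepage summary and the header) fail the `*:*:*.*` glob, and only nvme-bound controllers are recorded. A self-contained sketch of that loop; the sample rows below are hypothetical stand-ins for the status table, not verbatim `setup.sh status` output:

```shell
#!/usr/bin/env bash
# Sketch of the acl.sh device-collection loop traced above. The here-doc
# rows are illustrative (an assumption); only the column positions of the
# BDF and driver fields matter.
declare -a devs=()
declare -A drivers=()
while read -r _ dev _ _ _ driver _; do
    [[ $dev == *:*:*.* ]] || continue   # skip hugepage/header rows
    [[ $driver == nvme ]] || continue   # ioatdma channels are ignored
    devs+=("$dev")
    drivers["$dev"]=$driver
done <<'EOF'
I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
EOF
```

With this input only 0000:5e:00.0 lands in `devs`, matching the trace, where `(( 1 > 0 ))` confirms exactly one NVMe controller was collected.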
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:04:44.564 0000:5e:00.0 (8086 0a54): Skipping denied controller at 0000:5e:00.0
00:04:44.564 01:05:06 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:5e:00.0
00:04:44.564 01:05:06 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver
00:04:44.564 01:05:06 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@"
00:04:44.564 01:05:06 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5e:00.0 ]]
00:04:44.564 01:05:06 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver
00:04:44.564 01:05:06 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:04:44.564 01:05:06 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:04:44.564 01:05:06 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset
00:04:44.564 01:05:06 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:44.564 01:05:06 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:04:47.863
00:04:47.863 real 0m5.923s
00:04:47.863 user 0m1.785s
00:04:47.863 sys 0m3.406s
00:04:47.863 01:05:10 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:47.863 01:05:10 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x
00:04:47.863 ************************************
00:04:47.863 END TEST denied
00:04:47.863 ************************************
00:04:47.863 01:05:10 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0
00:04:47.863 01:05:10 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed
00:04:47.863 01:05:10 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:47.863 01:05:10 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:47.863 01:05:10 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:04:47.863 ************************************
00:04:47.863 START TEST allowed
00:04:47.863 ************************************
00:04:47.863 01:05:10 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed
00:04:47.863 01:05:10 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5e:00.0
00:04:47.863 01:05:10 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:5e:00.0 .*: nvme -> .*'
00:04:47.863 01:05:10 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config
00:04:47.863 01:05:10 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]]
00:04:47.863 01:05:10 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:04:52.063 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:04:52.063 01:05:14 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify
00:04:52.063 01:05:14 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver
00:04:52.063 01:05:14 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset
00:04:52.063 01:05:14 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:52.063 01:05:14 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:04:55.359
00:04:55.359 real 0m6.801s
00:04:55.359 user 0m2.102s
00:04:55.359 sys 0m3.819s
00:04:55.359 01:05:17 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:55.359 01:05:17 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x
00:04:55.359 ************************************
00:04:55.359 END TEST allowed
00:04:55.359 ************************************
00:04:55.359 01:05:17 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0
00:04:55.359
00:04:55.359 real 0m18.656s
00:04:55.359 user 0m6.112s
00:04:55.359 sys 0m11.091s
00:04:55.359 01:05:17 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:55.359 01:05:17 setup.sh.acl -- common/autotest_common.sh@10 --
# set +x 00:04:55.359 ************************************ 00:04:55.359 END TEST acl 00:04:55.359 ************************************ 00:04:55.359 01:05:17 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:55.359 01:05:17 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:55.359 01:05:17 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:55.359 01:05:17 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.359 01:05:17 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:55.359 ************************************ 00:04:55.359 START TEST hugepages 00:04:55.359 ************************************ 00:04:55.359 01:05:17 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:55.359 * Looking for test storage... 00:04:55.359 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:55.359 01:05:17 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:55.359 01:05:17 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:55.359 01:05:17 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:55.359 01:05:17 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:55.359 01:05:17 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:55.359 01:05:17 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:55.359 01:05:17 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:55.359 01:05:17 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:55.359 01:05:17 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:55.359 01:05:17 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:55.359 01:05:17 setup.sh.hugepages -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.359 01:05:17 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.359 01:05:17 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.359 01:05:17 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.359 01:05:17 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.359 01:05:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:55.359 01:05:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:55.359 01:05:17 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 168461840 kB' 'MemAvailable: 171696928 kB' 'Buffers: 3896 kB' 'Cached: 14670912 kB' 'SwapCached: 0 kB' 'Active: 11525332 kB' 'Inactive: 3694312 kB' 'Active(anon): 11107376 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 548064 kB' 'Mapped: 170684 kB' 'Shmem: 10562540 kB' 'KReclaimable: 534072 kB' 'Slab: 1192060 kB' 'SReclaimable: 534072 kB' 'SUnreclaim: 657988 kB' 'KernelStack: 20560 kB' 'PageTables: 8916 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 101982020 kB' 'Committed_AS: 12650912 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317000 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3937236 kB' 'DirectMap2M: 33490944 kB' 'DirectMap1G: 164626432 kB' 00:04:55.359 01:05:17 setup.sh.hugepages -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:55.359 01:05:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:55.359 01:05:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:55.359 01:05:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:55.361 01:05:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:55.361 01:05:17 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:55.361 01:05:17 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:55.361 01:05:17 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:55.361 01:05:17 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:55.361 01:05:17 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:55.361 01:05:17 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:55.361 01:05:17 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:55.361 01:05:17 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:55.361 01:05:17 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:55.361 01:05:17 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:55.361 01:05:17 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:55.361 01:05:17 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:55.361 01:05:17 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:55.361 01:05:17 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:55.361 01:05:17 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:55.361
01:05:17 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:55.361 01:05:17 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:55.361 01:05:17 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:55.361 01:05:17 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:55.361 01:05:17 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:55.361 01:05:17 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:55.361 01:05:17 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:55.361 01:05:17 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:55.361 01:05:17 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:55.361 01:05:17 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:55.361 01:05:17 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:55.361 01:05:17 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:55.361 01:05:17 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:55.361 01:05:17 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:55.361 01:05:17 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:55.361 01:05:17 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:55.361 01:05:17 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:55.361 01:05:17 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:55.361 01:05:17 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.361 01:05:17 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:55.361 
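The xtrace above shows setup/common.sh's get_meminfo helper scanning /proc/meminfo key by key until the requested field (here Hugepagesize, which answers 2048 kB). A minimal standalone sketch of that lookup follows; the real helper buffers the file with mapfile first, and the optional file argument here is an addition for testing, not part of the script:

```shell
# Sketch of the get_meminfo lookup traced above: scan a meminfo-style
# file until the requested key appears, then print its value (in kB).
# The second argument is a hypothetical test hook; the script itself
# always reads /proc/meminfo.
get_meminfo() {
    local get=$1 mem_f=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip keys until the match
        echo "$val"
        return 0
    done < "$mem_f"
    return 1   # key not present
}

get_meminfo Hugepagesize   # e.g. 2048 on a default x86_64 kernel
```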
************************************ 00:04:55.361 START TEST default_setup 00:04:55.361 ************************************ 00:04:55.361 01:05:17 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:04:55.361 01:05:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:55.361 01:05:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:55.361 01:05:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:55.361 01:05:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:55.361 01:05:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:55.361 01:05:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:55.361 01:05:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:55.361 01:05:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:55.361 01:05:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:55.361 01:05:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:55.361 01:05:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:55.361 01:05:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:55.361 01:05:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:55.361 01:05:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:55.361 01:05:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:55.361 01:05:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:55.361 01:05:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes 
in "${user_nodes[@]}" 00:04:55.361 01:05:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:55.361 01:05:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:55.361 01:05:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:55.361 01:05:17 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:55.361 01:05:17 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:57.979 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:57.979 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:57.979 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:57.979 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:57.979 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:57.979 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:57.979 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:57.979 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:57.979 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:57.979 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:57.979 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:57.979 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:57.979 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:57.979 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:57.979 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:57.979 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:58.549 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:58.549 01:05:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:58.549 01:05:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:58.549 01:05:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:58.549 01:05:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:58.549 
01:05:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170604148 kB' 'MemAvailable: 173839220 kB' 'Buffers: 3896 kB' 'Cached: 14671012 kB' 'SwapCached: 0 kB' 'Active: 11543976 kB' 'Inactive: 3694312 kB' 'Active(anon): 11126020 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 
'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 566608 kB' 'Mapped: 170756 kB' 'Shmem: 10562640 kB' 'KReclaimable: 534040 kB' 'Slab: 1190780 kB' 'SReclaimable: 534040 kB' 'SUnreclaim: 656740 kB' 'KernelStack: 20656 kB' 'PageTables: 8928 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12670628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317192 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3937236 kB' 'DirectMap2M: 33490944 kB' 'DirectMap1G: 164626432 kB' 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.550 01:05:21 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.550
-- # read -r var val _ 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.550 01:05:21 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.550 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.813 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.813 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.813 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.813 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.813 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.813 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.813 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.813 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.813 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.813 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.813 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.813 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.813 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.813 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.813 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.813 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.813 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.813 01:05:21 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.813 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.813 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.813 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.813 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.813 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.813 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.813 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.813 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.813 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.813 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.813 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.813 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.813 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.813 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.813 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.813 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:58.813 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:58.813 01:05:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:58.813 01:05:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo 
HugePages_Surp 00:04:58.813 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:58.813 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:58.813 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:58.813 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:58.813 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:58.813 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:58.813 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:58.813 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:58.813 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:58.813 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.813 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.813 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170607096 kB' 'MemAvailable: 173842152 kB' 'Buffers: 3896 kB' 'Cached: 14671016 kB' 'SwapCached: 0 kB' 'Active: 11544056 kB' 'Inactive: 3694312 kB' 'Active(anon): 11126100 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 566888 kB' 'Mapped: 170680 kB' 'Shmem: 10562644 kB' 'KReclaimable: 534008 kB' 'Slab: 1190748 kB' 'SReclaimable: 534008 kB' 'SUnreclaim: 656740 kB' 'KernelStack: 20688 kB' 'PageTables: 9292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 
12670648 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317288 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3937236 kB' 'DirectMap2M: 33490944 kB' 'DirectMap1G: 164626432 kB' 00:04:58.813 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.813 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.813 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.813 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.815 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.815 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:58.815 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:58.815 
01:05:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:58.815 01:05:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:58.815 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:58.815 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:58.815 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:58.815 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:58.815 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:58.815 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:58.815 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:58.815 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:58.815 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:58.815 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.815 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.815 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170607776 kB' 'MemAvailable: 173842832 kB' 'Buffers: 3896 kB' 'Cached: 14671032 kB' 'SwapCached: 0 kB' 'Active: 11544376 kB' 'Inactive: 3694312 kB' 'Active(anon): 11126420 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 567212 kB' 'Mapped: 170600 kB' 'Shmem: 10562660 kB' 'KReclaimable: 534008 kB' 'Slab: 1190800 kB' 'SReclaimable: 534008 kB' 'SUnreclaim: 
656792 kB' 'KernelStack: 20864 kB' 'PageTables: 9604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12670668 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317352 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3937236 kB' 'DirectMap2M: 33490944 kB' 'DirectMap1G: 164626432 kB' 00:04:58.815
[identical "[[ <key> == HugePages_Rsvd ]] -- continue -- IFS=': ' -- read" trace repeated for every /proc/meminfo key from MemTotal through HugePages_Free]
01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.817 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:58.817 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:58.817 01:05:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:58.817 01:05:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:58.817 nr_hugepages=1024 00:04:58.817 01:05:21
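[editor's note: the repeated trace above is setup/common.sh's get_meminfo scanning /proc/meminfo line by line with `IFS=': ' read -r var val _`, skipping every key until the requested one matches and then echoing its value. A minimal standalone sketch of that parsing loop, under the assumption that this is all the scan does; `get_meminfo_sketch` and the sample file path are hypothetical names for illustration, and a canned snippet is read instead of the live /proc/meminfo:]

```shell
#!/bin/sh
# Sketch of the per-key /proc/meminfo scan seen in the trace:
# split each "Key: value ..." line on ':' and spaces, "continue"
# past non-matching keys, and print the value on a match.
# get_meminfo_sketch is a hypothetical helper name, not from SPDK.
get_meminfo_sketch() {
    want=$1
    file=${2:-/proc/meminfo}
    while IFS=': ' read -r var val _; do
        # Every non-matching key falls through here, exactly like
        # the "[[ <key> == ... ]] -- continue" lines in the trace.
        [ "$var" = "$want" ] || continue
        echo "$val"
        return 0
    done < "$file"
    return 1
}

# Demo against a canned snippet rather than the live file.
cat > /tmp/meminfo.sample <<'EOF'
MemTotal:       191381136 kB
HugePages_Total:    1024
HugePages_Rsvd:        0
EOF
get_meminfo_sketch HugePages_Total /tmp/meminfo.sample
```

[with the sample above, the call prints 1024, matching the HugePages_Total value the real trace extracts]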
setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:58.817 resv_hugepages=0 00:04:58.817 01:05:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:58.817 surplus_hugepages=0 00:04:58.817 01:05:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:58.817 anon_hugepages=0 00:04:58.817 01:05:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:58.817 01:05:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:58.817 01:05:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:58.817 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:58.817 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:58.817 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:58.817 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:58.817 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:58.817 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:58.817 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:58.817 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:58.817 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:58.817 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.817 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.817 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 
'MemFree: 170610732 kB' 'MemAvailable: 173845788 kB' 'Buffers: 3896 kB' 'Cached: 14671056 kB' 'SwapCached: 0 kB' 'Active: 11544632 kB' 'Inactive: 3694312 kB' 'Active(anon): 11126676 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 567512 kB' 'Mapped: 170600 kB' 'Shmem: 10562684 kB' 'KReclaimable: 534008 kB' 'Slab: 1190792 kB' 'SReclaimable: 534008 kB' 'SUnreclaim: 656784 kB' 'KernelStack: 20864 kB' 'PageTables: 9592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12669200 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317304 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3937236 kB' 'DirectMap2M: 33490944 kB' 'DirectMap1G: 164626432 kB' 00:04:58.817
[identical "[[ <key> == HugePages_Total ]] -- continue -- IFS=': ' -- read" trace repeated for the /proc/meminfo keys MemTotal through SwapFree]
01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[
Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.818 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.818 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.818 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.818 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.818 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.818 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.818 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.818 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.818 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.818 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.818 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.818 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.818 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.818 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.818 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.818 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.818 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.818 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.818 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.818 01:05:21 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.818 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.818 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.818 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.818 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.818 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.818 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.818 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.818 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.818 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.818 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.818 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.818 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.818 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.818 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.818 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.818 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.818 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.818 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.818 01:05:21 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:58.818 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.818 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.818 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.818 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.818 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.818 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.818 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.818 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.818 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.818 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.818 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.818 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in 
/sys/devices/system/node/node+([0-9]) 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- 
# IFS=': ' 00:04:58.819 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 91722168 kB' 'MemUsed: 5893460 kB' 'SwapCached: 0 kB' 'Active: 2223452 kB' 'Inactive: 219552 kB' 'Active(anon): 2061628 kB' 'Inactive(anon): 0 kB' 'Active(file): 161824 kB' 'Inactive(file): 219552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2279264 kB' 'Mapped: 85448 kB' 'AnonPages: 167408 kB' 'Shmem: 1897888 kB' 'KernelStack: 11960 kB' 'PageTables: 4016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 347972 kB' 'Slab: 656920 kB' 'SReclaimable: 347972 kB' 'SUnreclaim: 308948 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.820 01:05:21 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.820 01:05:21 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.820 01:05:21 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.820 01:05:21 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.820 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.821 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.821 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.821 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.821 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.821 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.821 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.821 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.821 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.821 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.821 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.821 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.821 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.821 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.821 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.821 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.821 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.821 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.821 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.821 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.821 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.821 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.821 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.821 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.821 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:58.821 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.821 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:58.821 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:58.821 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var 
val _ 00:04:58.821 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.821 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:58.821 01:05:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:58.821 01:05:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:58.821 01:05:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:58.821 01:05:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:58.821 01:05:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:58.821 01:05:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:58.821 node0=1024 expecting 1024 00:04:58.821 01:05:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:58.821 00:04:58.821 real 0m3.806s 00:04:58.821 user 0m1.155s 00:04:58.821 sys 0m1.791s 00:04:58.821 01:05:21 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:58.821 01:05:21 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:58.821 ************************************ 00:04:58.821 END TEST default_setup 00:04:58.821 ************************************ 00:04:58.821 01:05:21 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:58.821 01:05:21 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:58.821 01:05:21 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:58.821 01:05:21 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.821 01:05:21 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:58.821 
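The default_setup trace above ends with the recurring `setup/common.sh` pattern: scan `/proc/meminfo` line by line with `IFS=': '` and `read -r var val _`, compare each field name against the requested key, and echo the value on a match. A minimal sketch of that lookup pattern — the function name and the optional file argument are illustrative, not SPDK's actual API:

```shell
#!/usr/bin/env bash
# Illustrative re-creation of the meminfo lookup loop seen in the trace:
# split each line on ': ', match the requested key, print its value.
get_meminfo_field() {
    local get=$1 mem_f=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        # e.g. "HugePages_Surp:   0" -> var=HugePages_Surp, val=0
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < "$mem_f"
    echo 0   # default when the key is not present
}
```

The real helper additionally switches `mem_f` to `/sys/devices/system/node/node$node/meminfo` when a node is given, which is why the trace probes that path before falling back to `/proc/meminfo`.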
************************************ 00:04:58.821 START TEST per_node_1G_alloc 00:04:58.821 ************************************ 00:04:58.821 01:05:21 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:04:58.821 01:05:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:58.821 01:05:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:58.821 01:05:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:58.821 01:05:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:04:58.821 01:05:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:58.821 01:05:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:04:58.821 01:05:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:58.821 01:05:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:58.821 01:05:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:58.821 01:05:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:58.821 01:05:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:58.821 01:05:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:58.821 01:05:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:58.821 01:05:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:58.821 01:05:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:58.821 01:05:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:58.821 
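In the per_node_1G_alloc setup just traced, `get_test_nr_hugepages 1048576 0 1` turns a 1048576 kB (1 GiB) request into `nr_hugepages=512` against the 2048 kB default hugepage size, and each node named in `HUGENODE=0,1` receives that full per-node count. A hedged sketch of that arithmetic (variable names follow the trace; the division step is inferred from the numbers shown, not quoted from hugepages.sh):

```shell
#!/usr/bin/env bash
# Inferred arithmetic: 1048576 kB requested / 2048 kB per hugepage = 512 pages.
default_hugepages=2048      # kB, Hugepagesize from /proc/meminfo
size=1048576                # kB, the 1 GiB test size passed in
node_ids=(0 1)              # HUGENODE=0,1 in the trace

nr_hugepages=$(( size / default_hugepages ))

# Each listed node gets the full per-node count (512 x 2 MiB = 1 GiB),
# matching the trace's nodes_test[_no_nodes]=512 assignments.
declare -A nodes_test
for node in "${node_ids[@]}"; do
    nodes_test[$node]=$nr_hugepages
done
```

This is why the later verification pass expects 512 pages on node0 and node1 even though `nr_hugepages` itself is 512, not 1024: the count is applied per node, not split across nodes.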
01:05:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:58.821 01:05:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:58.821 01:05:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:58.821 01:05:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:58.821 01:05:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:58.821 01:05:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:58.821 01:05:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:58.821 01:05:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:04:58.821 01:05:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:58.821 01:05:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:58.821 01:05:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:00.730 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:05:00.730 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:00.730 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:05:00.730 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:05:00.730 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:05:00.995 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:05:00.995 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:05:00.995 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:05:00.995 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:05:00.995 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:05:00.995 0000:80:04.6 (8086 
2021): Already using the vfio-pci driver 00:05:00.995 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:05:00.995 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:05:00.995 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:05:00.995 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:05:00.995 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:05:00.995 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:05:00.995 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:05:00.995 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:05:00.995 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:00.995 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:00.995 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:00.995 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:00.995 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:00.995 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:00.995 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:00.995 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:00.995 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:00.995 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:00.995 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:00.995 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:00.995 01:05:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.995 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:00.995 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:00.995 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.995 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.995 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.995 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.995 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170586892 kB' 'MemAvailable: 173821948 kB' 'Buffers: 3896 kB' 'Cached: 14671144 kB' 'SwapCached: 0 kB' 'Active: 11545780 kB' 'Inactive: 3694312 kB' 'Active(anon): 11127824 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 567784 kB' 'Mapped: 170764 kB' 'Shmem: 10562772 kB' 'KReclaimable: 534008 kB' 'Slab: 1190308 kB' 'SReclaimable: 534008 kB' 'SUnreclaim: 656300 kB' 'KernelStack: 20704 kB' 'PageTables: 9048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12668400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317272 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3937236 kB' 'DirectMap2M: 33490944 kB' 'DirectMap1G: 164626432 kB' 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.996 01:05:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.996 01:05:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.996 01:05:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.996 01:05:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.996 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.997 01:05:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.997 
01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@18 -- # local node= 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170588772 kB' 'MemAvailable: 173823828 kB' 'Buffers: 3896 kB' 'Cached: 14671148 kB' 'SwapCached: 0 kB' 'Active: 11545112 kB' 'Inactive: 3694312 kB' 'Active(anon): 11127156 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 567140 kB' 'Mapped: 170700 kB' 'Shmem: 10562776 kB' 'KReclaimable: 534008 kB' 'Slab: 1190308 kB' 'SReclaimable: 534008 kB' 'SUnreclaim: 656300 kB' 'KernelStack: 20688 kB' 'PageTables: 8992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12668420 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317256 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3937236 kB' 'DirectMap2M: 33490944 kB' 'DirectMap1G: 164626432 kB' 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.997 
01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.997 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.998 01:05:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.998 01:05:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.998 
01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.998 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.999 01:05:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.999 01:05:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170589584 kB' 'MemAvailable: 173824640 kB' 'Buffers: 3896 kB' 'Cached: 14671148 kB' 'SwapCached: 0 kB' 'Active: 11544908 kB' 'Inactive: 3694312 kB' 'Active(anon): 11126952 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 567868 kB' 'Mapped: 170620 kB' 'Shmem: 10562776 kB' 'KReclaimable: 534008 kB' 'Slab: 1190236 kB' 'SReclaimable: 534008 kB' 'SUnreclaim: 656228 kB' 'KernelStack: 20672 kB' 'PageTables: 8956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12678932 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317272 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3937236 kB' 'DirectMap2M: 33490944 kB' 'DirectMap1G: 164626432 kB' 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.999 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.000 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.000 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.000 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.000 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.000 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.000 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.000 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.000 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.000 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.000 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.000 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.000 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.000 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.000 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.000 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.000 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.000 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- 
# [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.000 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.000 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.000 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.000 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.000 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.000 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.000 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.000 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.000 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.000 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.000 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.000 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.000 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.000 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.000 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.000 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.000 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.000 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.000 01:05:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.000 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.000 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[identical IFS=': ' / read -r var val _ / [[ key == HugePages_Rsvd ]] / continue records elided for the remaining non-matching /proc/meminfo keys: Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free]
00:05:01.001 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.001 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:01.001 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:01.001 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:01.001 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:01.001 nr_hugepages=1024 00:05:01.001 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:01.001 resv_hugepages=0 00:05:01.001 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:01.001 surplus_hugepages=0 00:05:01.001 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo
anon_hugepages=0 00:05:01.001 anon_hugepages=0 00:05:01.001 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:01.001 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:01.001 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:01.001 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:01.001 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:01.001 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:01.001 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:01.001 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:01.001 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:01.001 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:01.001 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:01.001 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:01.001 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.001 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.001 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170589928 kB' 'MemAvailable: 173824984 kB' 'Buffers: 3896 kB' 'Cached: 14671188 kB' 'SwapCached: 0 kB' 'Active: 11544364 kB' 'Inactive: 3694312 kB' 'Active(anon): 11126408 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 
'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 566800 kB' 'Mapped: 170620 kB' 'Shmem: 10562816 kB' 'KReclaimable: 534008 kB' 'Slab: 1190236 kB' 'SReclaimable: 534008 kB' 'SUnreclaim: 656228 kB' 'KernelStack: 20640 kB' 'PageTables: 8812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12668232 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317224 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3937236 kB' 'DirectMap2M: 33490944 kB' 'DirectMap1G: 164626432 kB'
00:05:01.001 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.001 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[identical IFS=': ' / read -r var val _ / [[ key == HugePages_Total ]] / continue records elided for the remaining non-matching /proc/meminfo keys, MemFree through Unaccepted]
00:05:01.003 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.003 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:05:01.003 01:05:23
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:01.003 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:01.003 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:01.003 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:01.003 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:01.003 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:01.003 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:01.003 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:01.003 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:01.003 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:01.003 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:01.003 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:01.003 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:01.003 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:01.003 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:05:01.003 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:01.003 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:01.003 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:05:01.003 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:01.003 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:01.003 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:01.003 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:01.003 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 92762988 kB' 'MemUsed: 4852640 kB' 'SwapCached: 0 kB' 'Active: 2223884 kB' 'Inactive: 219552 kB' 'Active(anon): 2062060 kB' 'Inactive(anon): 0 kB' 'Active(file): 161824 kB' 'Inactive(file): 219552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2279268 kB' 'Mapped: 85468 kB' 'AnonPages: 167292 kB' 'Shmem: 1897892 kB' 'KernelStack: 11720 kB' 'PageTables: 3408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 347972 kB' 'Slab: 656644 kB' 'SReclaimable: 347972 kB' 'SUnreclaim: 308672 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:01.003 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.003 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.003 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.003 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.003 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.003 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:01.003 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.003 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.003 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.003 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.003 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.003 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.003 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.003 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.003 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.003 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.003 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.003 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.003 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.003 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.003 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.003 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.004 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.004 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.004 01:05:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.004 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.004 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.004 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.004 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.004 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.004 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.004 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.004 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.004 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.004 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.004 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.004 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.004 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.004 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.004 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.004 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.004 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.004 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.004 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.004 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.004 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.004 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.004 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.004 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.004 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.004 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.004 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.004 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.004 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.004 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.004 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.004 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.004 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.004 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.004 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.004 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.004 01:05:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.004 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.004 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.004 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.004 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.004 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.004 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.004 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.004 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.004 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.004 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.004 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.004 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.004 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.004 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.004 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.004 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.004 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.004 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:05:01.004 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.004 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.004 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.004 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.004 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.004 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.004 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.266 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.266 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.266 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.266 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.266 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.266 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.266 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.266 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.266 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.266 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.266 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.266 01:05:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.266 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.266 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.266 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.266 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.266 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.266 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.266 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.266 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.266 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.266 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.266 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.266 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.266 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.266 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.266 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.266 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.266 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.266 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:01.266 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.266 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:01.267 01:05:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765508 kB' 'MemFree: 77830064 kB' 'MemUsed: 15935444 kB' 'SwapCached: 0 kB' 'Active: 9320476 kB' 'Inactive: 3474760 kB' 'Active(anon): 9064344 kB' 'Inactive(anon): 0 kB' 'Active(file): 256132 kB' 'Inactive(file): 3474760 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12395860 kB' 'Mapped: 85152 kB' 'AnonPages: 399444 kB' 'Shmem: 8664968 kB' 'KernelStack: 8904 kB' 'PageTables: 5356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 186036 kB' 'Slab: 533576 kB' 'SReclaimable: 186036 kB' 'SUnreclaim: 347540 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 
kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.267 01:05:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': '
00:05:01.267 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace: setup/common.sh@31-32 repeats IFS=': ' / read -r var val _ / continue over the remaining per-node meminfo fields (FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total, HugePages_Free) until the requested field matches]
00:05:01.268 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:01.268 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:01.268 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:01.268 01:05:23
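The scan traced above is setup/common.sh's meminfo lookup: each `key: value` line is split on `IFS=': '` and skipped with `continue` until the requested field (here HugePages_Surp) matches, at which point its value is echoed. A minimal standalone sketch of that pattern (the helper name `get_meminfo_field` is illustrative, not SPDK's actual function):

```shell
#!/usr/bin/env bash
# Scan "key: value" lines the way setup/common.sh does: split each line
# on ': ', skip (continue) until the requested field, emit its value.
get_meminfo_field() {
    local get=$1 file=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue  # not the field we want
        echo "$val"                       # value only; the "kB" unit lands in $_
        return 0
    done < "$file"
    return 1
}
```

On the test node above this returns 0 for HugePages_Surp, which is why the trace ends with `echo 0` / `return 0`.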
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:01.268 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:01.268 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:01.268 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:01.268 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
node0=512 expecting 512
00:05:01.268 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:01.268 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:01.268 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:01.268 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
node1=512 expecting 512
00:05:01.268 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:05:01.268
00:05:01.268 real 0m2.286s
00:05:01.268 user 0m0.791s
00:05:01.268 sys 0m1.317s
00:05:01.268 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:01.268 01:05:23 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:05:01.268 ************************************
00:05:01.268 END TEST per_node_1G_alloc
00:05:01.268 ************************************
00:05:01.268 01:05:23 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:05:01.268 01:05:23 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:05:01.268 01:05:23 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:01.269
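The even_2G_alloc test that runs next requests 2 GiB of 2 MiB hugepages (2097152 kB / 2048 kB = 1024 pages) and splits them evenly across the two NUMA nodes, which is why both node0 and node1 are expected to hold 512. One way to arrive at that split, mirroring the `(( _no_nodes > 0 ))` countdown loop in the trace (variable names follow hugepages.sh, but this is a sketch, not the script's exact arithmetic):

```shell
#!/usr/bin/env bash
# Split a requested hugepage count evenly across NUMA nodes,
# counting down through the node indices as hugepages.sh does.
size_kb=2097152           # 2 GiB requested, in kB
default_hugepage_kb=2048  # 2 MiB hugepages
nr_hugepages=$((size_kb / default_hugepage_kb))   # 1024 pages

_no_nodes=2
_nr_hugepages=$nr_hugepages
declare -a nodes_test
while (( _no_nodes > 0 )); do
    # Each remaining node gets an equal share of what is left.
    nodes_test[_no_nodes - 1]=$((_nr_hugepages / _no_nodes))
    _nr_hugepages=$((_nr_hugepages - nodes_test[_no_nodes - 1]))
    (( _no_nodes-- ))
done

for node in "${!nodes_test[@]}"; do
    echo "node${node}=${nodes_test[node]}"
done
```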
01:05:23 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:01.269 01:05:23 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:01.269 ************************************
00:05:01.269 START TEST even_2G_alloc
00:05:01.269 ************************************
00:05:01.269 01:05:23 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc
00:05:01.269 01:05:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:05:01.269 01:05:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:05:01.269 01:05:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:01.269 01:05:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:01.269 01:05:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:01.269 01:05:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:01.269 01:05:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:01.269 01:05:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:01.269 01:05:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:01.269 01:05:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:01.269 01:05:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:01.269 01:05:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:01.269 01:05:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:01.269 01:05:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:01.269 01:05:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:01.269 01:05:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:05:01.269 01:05:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:05:01.269 01:05:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:05:01.269 01:05:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:01.269 01:05:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:05:01.269 01:05:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:05:01.269 01:05:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:05:01.269 01:05:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:01.269 01:05:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:05:01.269 01:05:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:05:01.269 01:05:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:05:01.269 01:05:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:01.269 01:05:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:05:03.816 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:05:03.816 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:03.816 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:05:03.816 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:05:03.816 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:05:03.816 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:05:03.816 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:05:03.816 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:05:03.816 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:05:03.816 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:05:03.816 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:05:03.816 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:05:03.816 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:05:03.816 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:05:03.816 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:05:03.816 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:05:03.816 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:05:03.816 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:05:03.816 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:05:03.816 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:03.816 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:03.816 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:03.816 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:03.816 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:03.816 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:03.816 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:03.816 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:03.816 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:05:03.816 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:03.816 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:03.816 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- #
mem_f=/proc/meminfo
00:05:03.816 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:03.817 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:03.817 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:03.817 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:03.817 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:03.817 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:03.817 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170571372 kB' 'MemAvailable: 173806428 kB' 'Buffers: 3896 kB' 'Cached: 14671300 kB' 'SwapCached: 0 kB' 'Active: 11542348 kB' 'Inactive: 3694312 kB' 'Active(anon): 11124392 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564216 kB' 'Mapped: 169804 kB' 'Shmem: 10562928 kB' 'KReclaimable: 534008 kB' 'Slab: 1190332 kB' 'SReclaimable: 534008 kB' 'SUnreclaim: 656324 kB' 'KernelStack: 20688 kB' 'PageTables: 8844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12648876 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317240 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3937236 kB' 'DirectMap2M: 33490944 kB'
'DirectMap1G: 164626432 kB'
[xtrace: setup/common.sh@31-32 repeats IFS=': ' / read -r var val _ / continue over the /proc/meminfo fields (MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted) until the requested field matches]
00:05:03.818 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:03.818 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:05:03.818 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:03.818 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:05:03.818 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:03.818 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:03.818 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:05:03.818 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:03.818 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:03.818 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:03.818 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:03.818 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:03.818 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:03.818 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:03.818 01:05:25
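The `mapfile -t mem` / `mem=("${mem[@]#Node +([0-9]) }")` pair traced above slurps the whole meminfo file into an array and strips any leading `Node <n> ` prefix in a single expansion, so the same scan works for both /proc/meminfo and the per-node /sys/devices/system/node/node<n>/meminfo files (whose lines carry that prefix). The `+([0-9])` pattern requires extglob. A self-contained sketch with sample data (the values are placeholders):

```shell
#!/usr/bin/env bash
shopt -s extglob   # needed for the +([0-9]) pattern below

# Per-node meminfo lines carry a "Node <n> " prefix; /proc/meminfo lines do not.
mem=(
    'Node 0 MemTotal: 95681168 kB'
    'Node 0 HugePages_Surp: 0'
    'MemTotal: 191381136 kB'
)

# Strip the prefix from every element in one pass, as setup/common.sh does.
mem=("${mem[@]#Node +([0-9]) }")

printf '%s\n' "${mem[@]}"
```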
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.818 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.818 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170571952 kB' 'MemAvailable: 173807008 kB' 'Buffers: 3896 kB' 'Cached: 14671304 kB' 'SwapCached: 0 kB' 'Active: 11542712 kB' 'Inactive: 3694312 kB' 'Active(anon): 11124756 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564600 kB' 'Mapped: 169804 kB' 'Shmem: 10562932 kB' 'KReclaimable: 534008 kB' 'Slab: 1190332 kB' 'SReclaimable: 534008 kB' 'SUnreclaim: 656324 kB' 'KernelStack: 20672 kB' 'PageTables: 8788 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12648892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317192 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3937236 kB' 'DirectMap2M: 33490944 kB' 'DirectMap1G: 164626432 kB' 00:05:03.818 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.818 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.818 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.818 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.818 01:05:25 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.818 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.818 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.818 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.818 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.818 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.818 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.818 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.818 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.818 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.818 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.818 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.818 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.818 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.818 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.818 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.818 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.818 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.818 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.818 01:05:25 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:03.818 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.818 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.818 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.818 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.818 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.818 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.818 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.818 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.818 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.819 
01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.819 01:05:25 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.819 01:05:25 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.819 01:05:25 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.819 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.820 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.820 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.820 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.820 01:05:25 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.820 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.820 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.820 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.820 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.820 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.820 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.820 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.820 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.820 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.820 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.820 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.820 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.820 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.820 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.820 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.820 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.820 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.820 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.820 01:05:25 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.820 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.820 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.820 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.820 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.820 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.820 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.820 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.820 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.820 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.820 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.820 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.820 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.820 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.820 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.820 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.820 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.820 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.820 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.820 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.820 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.820 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.820 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.820 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.820 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.820 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.820 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.820 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.820 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:03.820 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:03.820 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:03.820 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:03.820 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:03.820 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:03.820 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:03.820 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:03.820 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.820 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:03.820 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 
00:05:03.820 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.820 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.820 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.820 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.820 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170572548 kB' 'MemAvailable: 173807604 kB' 'Buffers: 3896 kB' 'Cached: 14671304 kB' 'SwapCached: 0 kB' 'Active: 11541432 kB' 'Inactive: 3694312 kB' 'Active(anon): 11123476 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 563280 kB' 'Mapped: 169736 kB' 'Shmem: 10562932 kB' 'KReclaimable: 534008 kB' 'Slab: 1190332 kB' 'SReclaimable: 534008 kB' 'SUnreclaim: 656324 kB' 'KernelStack: 20624 kB' 'PageTables: 8644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12648912 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317192 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3937236 kB' 'DirectMap2M: 33490944 kB' 'DirectMap1G: 164626432 kB' 00:05:03.820 01:05:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.820 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:05:03.820 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.820 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.820 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.820 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.820 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.820 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.820 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.820 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.820 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.820 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.820 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.820 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.820 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.820 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.820 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.820 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.820 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.820 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.820 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.820 01:05:26 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.820 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.820 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.820 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.820 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.820 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.820 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.820 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.820 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.820 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.820 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.820 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.820 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.820 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.820 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.820 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.820 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.820 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.820 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.820 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.820 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.820 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.821 
01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.821 
01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.821 01:05:26 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.821 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.822 01:05:26 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:03.822 nr_hugepages=1024 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:03.822 resv_hugepages=0 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:03.822 surplus_hugepages=0 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:03.822 anon_hugepages=0 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@18 -- # local node= 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170572048 kB' 'MemAvailable: 173807104 kB' 'Buffers: 3896 kB' 'Cached: 14671304 kB' 'SwapCached: 0 kB' 'Active: 11541604 kB' 'Inactive: 3694312 kB' 'Active(anon): 11123648 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 563932 kB' 'Mapped: 169660 kB' 'Shmem: 10562932 kB' 'KReclaimable: 534008 kB' 'Slab: 1190336 kB' 'SReclaimable: 534008 kB' 'SUnreclaim: 656328 kB' 'KernelStack: 20640 kB' 'PageTables: 8684 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12648936 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317192 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3937236 kB' 'DirectMap2M: 33490944 kB' 'DirectMap1G: 164626432 kB' 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.822 
01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.822 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.823 01:05:26 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.823 01:05:26 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.823 01:05:26 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.823 01:05:26 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.823 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.824 01:05:26 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.824 01:05:26 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # 
get_meminfo HugePages_Surp 0 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 92759668 kB' 'MemUsed: 4855960 kB' 'SwapCached: 0 kB' 'Active: 2224240 kB' 'Inactive: 219552 kB' 'Active(anon): 2062416 kB' 'Inactive(anon): 0 kB' 'Active(file): 161824 kB' 'Inactive(file): 219552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2279360 kB' 'Mapped: 85140 kB' 'AnonPages: 167528 kB' 'Shmem: 1897984 kB' 'KernelStack: 11768 kB' 'PageTables: 3600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 347972 kB' 'Slab: 656880 kB' 'SReclaimable: 347972 kB' 'SUnreclaim: 308908 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 
0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.824 01:05:26 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.824 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.825 01:05:26 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.825 01:05:26 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.825 01:05:26 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.825 01:05:26 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.825 01:05:26 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:03.825 
01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:05:03.825 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:03.826 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:03.826 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.826 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:03.826 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:03.826 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.826 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.826 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.826 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.826 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765508 kB' 'MemFree: 77812128 kB' 'MemUsed: 15953380 kB' 'SwapCached: 0 kB' 'Active: 9317340 kB' 'Inactive: 3474760 kB' 'Active(anon): 9061208 kB' 'Inactive(anon): 0 kB' 'Active(file): 256132 kB' 'Inactive(file): 3474760 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12395896 kB' 'Mapped: 84520 kB' 'AnonPages: 396292 kB' 'Shmem: 8665004 kB' 'KernelStack: 8840 kB' 'PageTables: 4992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 186036 kB' 'Slab: 533456 kB' 'SReclaimable: 186036 kB' 'SUnreclaim: 347420 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:03.826 01:05:26 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.826 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.826 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.826 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.826 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.826 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.826 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.826 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.826 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.826 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.826 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.826 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.826 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.826 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.826 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.826 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.826 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.826 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.826 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.826 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:03.826 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.826 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.826 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.826 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.826 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.826 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.826 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.826 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.826 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.826 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.826 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.826 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.826 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.826 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.826 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.826 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.826 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.826 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.826 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.826 01:05:26 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.826 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.826 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.826 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.826 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.826 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.826 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.826 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.826 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.826 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.826 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.826 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.826 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.826 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.826 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.826 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.826 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.826 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.826 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.826 01:05:26 setup.sh.hugepages.even_2G_alloc 
[xtrace condensed: setup/common.sh@31-32 read each remaining /proc/meminfo key in turn (Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total, HugePages_Free) and hit continue on every key other than HugePages_Surp]
00:05:03.827 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.827 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:03.827 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:03.827 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:03.827 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:03.827 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:03.827 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:03.827 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:03.827 node0=512 expecting 512 00:05:03.827 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:03.827 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # 
sorted_t[nodes_test[node]]=1 00:05:03.827 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:03.827 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:05:03.827 node1=512 expecting 512 00:05:03.827 01:05:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:03.827 00:05:03.827 real 0m2.488s 00:05:03.827 user 0m0.891s 00:05:03.827 sys 0m1.622s 00:05:03.827 01:05:26 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:03.827 01:05:26 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:03.827 ************************************ 00:05:03.827 END TEST even_2G_alloc 00:05:03.827 ************************************ 00:05:03.827 01:05:26 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:03.827 01:05:26 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:03.827 01:05:26 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:03.827 01:05:26 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.827 01:05:26 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:03.827 ************************************ 00:05:03.827 START TEST odd_alloc 00:05:03.827 ************************************ 00:05:03.827 01:05:26 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:05:03.827 01:05:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:05:03.827 01:05:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:05:03.827 01:05:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:03.827 01:05:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:03.827 01:05:26 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:05:03.827 01:05:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:03.827 01:05:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:03.827 01:05:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:03.827 01:05:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:05:03.827 01:05:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:03.827 01:05:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:03.827 01:05:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:03.827 01:05:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:03.827 01:05:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:03.827 01:05:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:03.827 01:05:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:03.827 01:05:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:05:03.827 01:05:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:05:03.827 01:05:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:03.827 01:05:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:05:03.827 01:05:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:03.827 01:05:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:03.827 01:05:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:03.827 01:05:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:05:03.827 01:05:26 setup.sh.hugepages.odd_alloc -- 
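The per-node split traced above (nr_hugepages=1025 over _no_nodes=2, assigned as 513 and 512) can be sketched as a small standalone helper. This is an illustrative reconstruction of the arithmetic, not the actual setup/hugepages.sh code; the function name split_hugepages is invented for the sketch.

```shell
#!/usr/bin/env bash
# Sketch: distribute a (possibly odd) hugepage count across NUMA nodes.
# Working from the last node down, each node takes the floor of the
# remaining pages divided by the remaining nodes, so the leftover page
# from an odd total lands on node0 (513 + 512 = 1025).
split_hugepages() {
  local total=$1 nodes=$2
  local -a per_node
  while (( nodes > 0 )); do
    per_node[nodes - 1]=$(( total / nodes ))  # floor share for this node
    total=$(( total - per_node[nodes - 1] ))  # pages left for the rest
    nodes=$(( nodes - 1 ))
  done
  local i
  for i in "${!per_node[@]}"; do
    echo "node${i}=${per_node[i]}"
  done
}

split_hugepages 1025 2   # prints node0=513 then node1=512, as in the trace
```

With an even total (e.g. 1024 over 2 nodes) both nodes get the same share, which is the even_2G_alloc case that just finished above.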
setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:05:03.827 01:05:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:05:03.827 01:05:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:03.827 01:05:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:05.739 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:05:05.739 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:05.739 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:05:05.739 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:05:05.739 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:05:05.739 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:05:05.739 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:05:05.739 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:05:05.739 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:05:05.739 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:05:05.739 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:05:05.739 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:05:05.739 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:05:06.005 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:05:06.005 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:05:06.005 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:05:06.005 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:05:06.005 01:05:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:05:06.005 01:05:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:05:06.005 01:05:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:06.005 01:05:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # 
local sorted_s 00:05:06.005 01:05:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:06.005 01:05:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:06.005 01:05:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:06.005 01:05:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:06.005 01:05:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:06.005 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:06.005 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:06.005 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:06.005 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:06.005 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:06.005 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:06.005 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:06.005 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:06.005 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:06.005 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.005 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.005 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170564780 kB' 'MemAvailable: 173799836 kB' 'Buffers: 3896 kB' 'Cached: 14671448 kB' 'SwapCached: 0 kB' 'Active: 11542016 kB' 'Inactive: 3694312 kB' 'Active(anon): 11124060 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564176 kB' 'Mapped: 169704 kB' 'Shmem: 10563076 kB' 'KReclaimable: 534008 kB' 'Slab: 1191016 kB' 'SReclaimable: 534008 kB' 'SUnreclaim: 657008 kB' 'KernelStack: 20624 kB' 'PageTables: 8624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029572 kB' 'Committed_AS: 12649408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317320 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3937236 kB' 'DirectMap2M: 33490944 kB' 'DirectMap1G: 164626432 kB' 00:05:06.005 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.005 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.005 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.005 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.005 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.005 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.005 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.005 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.005 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.005 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
[xtrace condensed: setup/common.sh@31-32 read each /proc/meminfo key in turn (Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted) and hit continue on every key other than AnonHugePages]
00:05:06.006 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.006 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:06.006 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:06.006 01:05:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:06.006 01:05:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:06.006 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:06.006 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:06.006 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:06.007 01:05:28 
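The repeated get_meminfo lookups in this trace boil down to splitting each /proc/meminfo line on ': ' and printing the value for the requested key. A minimal standalone sketch of that pattern follows; the optional second argument (an alternate input file) is an addition for illustration and is not part of setup/common.sh.

```shell
#!/usr/bin/env bash
# Sketch of a get_meminfo-style lookup: read each "Key:   value [kB]"
# line, splitting on ':' and spaces, and print the value for the
# requested key. The trailing unit ("kB") falls into the discarded
# third field. Second arg overrides the input file for testing.
get_meminfo() {
  local get=$1 mem_f=${2:-/proc/meminfo} var val _
  while IFS=': ' read -r var val _; do
    if [[ $var == "$get" ]]; then
      echo "$val"
      return 0
    fi
  done < "$mem_f"
  return 1   # key not found
}
```

For example, get_meminfo HugePages_Surp would print the surplus hugepage count, which is 0 in the snapshots above.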
setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170565516 kB' 'MemAvailable: 173800572 kB' 'Buffers: 3896 kB' 'Cached: 14671460 kB' 'SwapCached: 0 kB' 'Active: 11541620 kB' 'Inactive: 3694312 kB' 'Active(anon): 11123664 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 563804 kB' 'Mapped: 169640 kB' 'Shmem: 10563088 kB' 'KReclaimable: 534008 kB' 'Slab: 1191076 kB' 'SReclaimable: 534008 kB' 'SUnreclaim: 657068 kB' 'KernelStack: 20592 kB' 'PageTables: 8524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029572 kB' 'Committed_AS: 12649424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317272 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3937236 kB' 'DirectMap2M: 33490944 kB' 'DirectMap1G: 164626432 kB' 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.007 01:05:28 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.007 01:05:28 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.007 01:05:28 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.007 01:05:28 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # continue 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.007 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# continue 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.008 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170565768 kB' 'MemAvailable: 173800824 kB' 'Buffers: 3896 kB' 'Cached: 14671464 kB' 'SwapCached: 0 kB' 
'Active: 11541664 kB' 'Inactive: 3694312 kB' 'Active(anon): 11123708 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 563848 kB' 'Mapped: 169640 kB' 'Shmem: 10563092 kB' 'KReclaimable: 534008 kB' 'Slab: 1191076 kB' 'SReclaimable: 534008 kB' 'SUnreclaim: 657068 kB' 'KernelStack: 20608 kB' 'PageTables: 8568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029572 kB' 'Committed_AS: 12649444 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317272 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3937236 kB' 'DirectMap2M: 33490944 kB' 'DirectMap1G: 164626432 kB' 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
[[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.009 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.010 
01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.010 01:05:28 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.010 01:05:28 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.010 01:05:28 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.010 01:05:28 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:05:06.010 nr_hugepages=1025 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:06.010 resv_hugepages=0 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:06.010 surplus_hugepages=0 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:06.010 anon_hugepages=0 00:05:06.010 
01:05:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:06.010 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170565012 kB' 'MemAvailable: 173800068 kB' 'Buffers: 3896 kB' 'Cached: 14671464 kB' 'SwapCached: 0 kB' 'Active: 11541824 kB' 'Inactive: 3694312 kB' 'Active(anon): 11123868 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564024 kB' 'Mapped: 169640 kB' 'Shmem: 10563092 kB' 'KReclaimable: 534008 kB' 'Slab: 1191076 kB' 'SReclaimable: 534008 kB' 'SUnreclaim: 657068 kB' 'KernelStack: 20592 kB' 'PageTables: 8528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 
kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029572 kB' 'Committed_AS: 12649464 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317288 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3937236 kB' 'DirectMap2M: 33490944 kB' 'DirectMap1G: 164626432 kB' 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.011 01:05:28 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.011 01:05:28 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.011 
01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.011 01:05:28 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.011 01:05:28 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.011 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.012 01:05:28 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.012 01:05:28 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.012 
01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local 
mem_f mem 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 92761580 kB' 'MemUsed: 4854048 kB' 'SwapCached: 0 kB' 'Active: 2222600 kB' 'Inactive: 219552 kB' 'Active(anon): 2060776 kB' 'Inactive(anon): 0 kB' 'Active(file): 161824 kB' 'Inactive(file): 219552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2279416 kB' 'Mapped: 85152 kB' 'AnonPages: 165808 kB' 'Shmem: 1898040 kB' 'KernelStack: 11736 kB' 'PageTables: 3440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 347972 kB' 'Slab: 657324 kB' 'SReclaimable: 347972 kB' 'SUnreclaim: 309352 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.012 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.013 
01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.013 01:05:28 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.013 01:05:28 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.013 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.014 01:05:28 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765508 kB' 'MemFree: 77803692 kB' 'MemUsed: 
15961816 kB' 'SwapCached: 0 kB' 'Active: 9319120 kB' 'Inactive: 3474760 kB' 'Active(anon): 9062988 kB' 'Inactive(anon): 0 kB' 'Active(file): 256132 kB' 'Inactive(file): 3474760 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12396012 kB' 'Mapped: 84488 kB' 'AnonPages: 397988 kB' 'Shmem: 8665120 kB' 'KernelStack: 8856 kB' 'PageTables: 5084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 186036 kB' 'Slab: 533752 kB' 'SReclaimable: 186036 kB' 'SUnreclaim: 347716 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.014 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:05:06.015 node0=512 expecting 
513 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:05:06.015 node1=513 expecting 512 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:05:06.015 00:05:06.015 real 0m2.335s 00:05:06.015 user 0m0.818s 00:05:06.015 sys 0m1.393s 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:06.015 01:05:28 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:06.015 ************************************ 00:05:06.015 END TEST odd_alloc 00:05:06.015 ************************************ 00:05:06.276 01:05:28 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:06.276 01:05:28 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:06.276 01:05:28 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:06.276 01:05:28 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.276 01:05:28 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:06.276 ************************************ 00:05:06.276 START TEST custom_alloc 00:05:06.276 ************************************ 00:05:06.276 01:05:28 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:05:06.276 01:05:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:05:06.276 01:05:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:05:06.276 01:05:28 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@170 -- # nodes_hp=() 00:05:06.276 01:05:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:06.276 01:05:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:06.276 01:05:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:05:06.276 01:05:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:06.276 01:05:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:06.276 01:05:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:06.276 01:05:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:06.276 01:05:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:06.276 01:05:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:06.276 01:05:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:06.276 01:05:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:06.276 01:05:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:06.276 01:05:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:06.276 01:05:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:06.276 01:05:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:06.276 01:05:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:06.276 01:05:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:06.276 01:05:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:05:06.276 01:05:28 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@83 -- # : 256 00:05:06.276 01:05:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:05:06.276 01:05:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:06.276 01:05:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:05:06.276 01:05:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:06.276 01:05:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:06.276 01:05:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:06.276 01:05:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:06.276 01:05:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:05:06.276 01:05:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:05:06.276 01:05:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:06.276 01:05:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:06.276 01:05:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:06.276 01:05:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:06.276 01:05:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:06.276 01:05:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:06.276 01:05:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:06.276 01:05:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:06.276 01:05:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:06.276 01:05:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:06.276 01:05:28 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:06.276 01:05:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:06.276 01:05:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:06.276 01:05:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:06.276 01:05:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:06.276 01:05:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:05:06.276 01:05:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:05:06.276 01:05:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:06.276 01:05:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:06.276 01:05:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:06.276 01:05:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:06.276 01:05:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:06.276 01:05:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:06.276 01:05:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:06.276 01:05:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:06.276 01:05:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:06.276 01:05:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:06.276 01:05:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:06.276 01:05:28 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:06.276 01:05:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:06.276 01:05:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:06.276 01:05:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:05:06.276 01:05:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:06.276 01:05:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:06.276 01:05:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:06.276 01:05:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:05:06.276 01:05:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:05:06.276 01:05:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:05:06.276 01:05:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:05:06.276 01:05:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:06.276 01:05:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:08.824 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:05:08.824 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:08.824 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:05:08.824 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:05:08.824 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:05:08.824 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:05:08.824 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:05:08.824 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:05:08.824 
0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:05:08.824 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:05:08.824 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:05:08.824 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:05:08.824 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:05:08.824 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:05:08.824 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:05:08.824 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:05:08.824 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:05:08.824 01:05:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:05:08.824 01:05:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:08.824 01:05:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:05:08.824 01:05:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:08.824 01:05:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:08.824 01:05:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:08.824 01:05:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:08.824 01:05:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:08.824 01:05:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:08.824 01:05:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:08.824 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:08.824 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:08.824 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:08.824 01:05:30 
setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:08.824 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.824 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:08.824 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:08.824 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:08.824 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:08.824 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.824 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.824 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 169527300 kB' 'MemAvailable: 172762340 kB' 'Buffers: 3896 kB' 'Cached: 14671592 kB' 'SwapCached: 0 kB' 'Active: 11540600 kB' 'Inactive: 3694312 kB' 'Active(anon): 11122644 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 562596 kB' 'Mapped: 169720 kB' 'Shmem: 10563220 kB' 'KReclaimable: 533976 kB' 'Slab: 1191080 kB' 'SReclaimable: 533976 kB' 'SUnreclaim: 657104 kB' 'KernelStack: 20768 kB' 'PageTables: 8976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506308 kB' 'Committed_AS: 12651600 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317400 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 
1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3937236 kB' 'DirectMap2M: 33490944 kB' 'DirectMap1G: 164626432 kB' 00:05:08.824 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.824 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.824 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.824 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.824 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.824 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.824 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.824 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.824 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.824 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.824 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.824 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.824 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.824 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.824 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.824 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.824 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.824 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.825 01:05:30 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.825 01:05:30 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.825 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.826 01:05:30 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:08.826 
01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 169528068 kB' 'MemAvailable: 172763108 kB' 'Buffers: 3896 kB' 'Cached: 14671596 kB' 'SwapCached: 0 kB' 'Active: 11540512 kB' 'Inactive: 3694312 kB' 'Active(anon): 11122556 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 562440 kB' 'Mapped: 169716 kB' 'Shmem: 10563224 kB' 'KReclaimable: 533976 kB' 'Slab: 1191080 kB' 'SReclaimable: 533976 kB' 'SUnreclaim: 657104 kB' 'KernelStack: 20688 kB' 'PageTables: 8904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506308 kB' 'Committed_AS: 12652492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317384 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3937236 kB' 'DirectMap2M: 33490944 kB' 'DirectMap1G: 164626432 kB' 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.826 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.826 01:05:30 
[... identical trace entries (IFS=': ' / read -r var val _ / [[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue) repeated for each remaining /proc/meminfo key from MemFree through HugePages_Free ...]
00:05:08.828 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:08.828 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:08.828 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:05:08.828 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:08.828 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:08.828 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:08.828 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:08.828 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:08.828 01:05:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
00:05:08.828 01:05:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:08.828 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:08.828 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:05:08.828 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:08.828 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:08.828 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:08.828 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:08.828 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:08.828 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:08.828 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:08.828 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:08.828 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:08.828 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 169527892 kB' 'MemAvailable: 172762932 kB' 'Buffers: 3896 kB' 'Cached: 14671612 kB' 'SwapCached: 0 kB' 'Active: 11540492 kB' 'Inactive: 3694312 kB' 'Active(anon): 11122536 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 562508 kB' 'Mapped: 169708 kB' 'Shmem: 10563240 kB' 'KReclaimable: 533976 kB' 'Slab: 1191080 kB' 'SReclaimable: 533976 kB' 'SUnreclaim: 657104 kB' 'KernelStack: 20784 kB' 'PageTables: 9128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506308 kB' 'Committed_AS: 12652548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317416 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3937236 kB' 'DirectMap2M: 33490944 kB' 'DirectMap1G: 164626432 kB'
00:05:08.828 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:08.828 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:05:08.828 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:08.828 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:08.828 01:05:30
[... identical trace entries ([[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue / IFS=': ' / read -r var val _) repeated for each remaining /proc/meminfo key; log truncated mid-scan ...]
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.830 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.830 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.830 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.830 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.830 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.830 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.830 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.830 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.830 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.830 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.830 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.830 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.830 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.830 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.830 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.830 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.830 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.830 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.830 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.830 01:05:30 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.830 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.830 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.830 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.830 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.830 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.830 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.830 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.830 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.830 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.830 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.830 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.830 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.830 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.830 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.830 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.830 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.830 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.830 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.830 01:05:30 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:08.830 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.830 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.830 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.830 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.830 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.830 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.830 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.830 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.830 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.830 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.830 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.830 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.830 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.830 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.830 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.830 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.830 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.830 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.830 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:08.830 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.830 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.830 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:08.830 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:08.830 01:05:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:08.830 01:05:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:05:08.830 nr_hugepages=1536 00:05:08.830 01:05:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:08.830 resv_hugepages=0 00:05:08.830 01:05:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:08.830 surplus_hugepages=0 00:05:08.830 01:05:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:08.830 anon_hugepages=0 00:05:08.830 01:05:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:05:08.830 01:05:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:05:08.830 01:05:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:08.830 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:08.830 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:08.830 01:05:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:08.830 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:08.830 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.830 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:05:08.830 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:08.830 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:08.830 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:08.830 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.830 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.831 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 169526096 kB' 'MemAvailable: 172761136 kB' 'Buffers: 3896 kB' 'Cached: 14671636 kB' 'SwapCached: 0 kB' 'Active: 11540704 kB' 'Inactive: 3694312 kB' 'Active(anon): 11122748 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 562628 kB' 'Mapped: 169708 kB' 'Shmem: 10563264 kB' 'KReclaimable: 533976 kB' 'Slab: 1191080 kB' 'SReclaimable: 533976 kB' 'SUnreclaim: 657104 kB' 'KernelStack: 20832 kB' 'PageTables: 9208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506308 kB' 'Committed_AS: 12652784 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317384 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3937236 kB' 'DirectMap2M: 33490944 kB' 'DirectMap1G: 164626432 kB' 00:05:08.831 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.831 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.831 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.831 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.832 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.832 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo
1536 00:05:08.832 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:08.832 01:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:05:08.832 01:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:08.832 01:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:05:08.832 01:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:08.832 01:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:08.832 01:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:08.832 01:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:08.832 01:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:08.832 01:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:08.832 01:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:08.832 01:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:08.832 01:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:08.832 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:08.832 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:05:08.832 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:08.832 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:08.832 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.832 01:05:31 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:08.832 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:08.832 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:08.832 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:08.832 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.832 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.832 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 92761256 kB' 'MemUsed: 4854372 kB' 'SwapCached: 0 kB' 'Active: 2222336 kB' 'Inactive: 219552 kB' 'Active(anon): 2060512 kB' 'Inactive(anon): 0 kB' 'Active(file): 161824 kB' 'Inactive(file): 219552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2279480 kB' 'Mapped: 85160 kB' 'AnonPages: 165504 kB' 'Shmem: 1898104 kB' 'KernelStack: 11800 kB' 'PageTables: 3632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 347940 kB' 'Slab: 657480 kB' 'SReclaimable: 347940 kB' 'SUnreclaim: 309540 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:08.832 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.832 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.832 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.832 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.832 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.832 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.832 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.832 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.832 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.832 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.832 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.833 01:05:31 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.833 01:05:31 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.833 
01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.833 01:05:31 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.833 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765508 kB' 'MemFree: 76763708 kB' 'MemUsed: 17001800 kB' 'SwapCached: 0 kB' 'Active: 9318472 kB' 'Inactive: 3474760 kB' 'Active(anon): 9062340 kB' 'Inactive(anon): 0 kB' 'Active(file): 256132 kB' 'Inactive(file): 3474760 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12396068 kB' 'Mapped: 84548 kB' 'AnonPages: 397212 kB' 'Shmem: 8665176 kB' 'KernelStack: 9080 kB' 'PageTables: 5460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 186036 kB' 'Slab: 533600 kB' 'SReclaimable: 186036 kB' 'SUnreclaim: 347564 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.834 01:05:31 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.834 01:05:31 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.834 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.835 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.835 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.835 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.835 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.835 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:05:08.835 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.835 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.835 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.835 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.835 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.835 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.835 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.835 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.835 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.835 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.835 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.835 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.835 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.835 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.835 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.835 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.835 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.835 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.835 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.835 01:05:31 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.835 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.835 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.835 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.835 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.835 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.835 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.835 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.835 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.835 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.835 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.835 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.835 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.835 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.835 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.835 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:08.835 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.835 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.835 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.835 01:05:31 setup.sh.hugepages.custom_alloc -- 
00:05:08.835 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # (field-by-field scan of /proc/meminfo continues; every key other than HugePages_Surp hits "continue")
00:05:08.835 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:08.835 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:08.835 01:05:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:08.835 01:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:08.835 01:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126-127 -- # for node in "${!nodes_test[@]}"; sorted_t[nodes_test[node]]=1; sorted_s[nodes_sys[node]]=1
00:05:08.835 01:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:05:08.835 node0=512 expecting 512
00:05:08.835 01:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:05:08.835 node1=1024 expecting 1024
00:05:08.835 01:05:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:05:08.835
00:05:08.835 real	0m2.542s
00:05:08.835 user	0m0.967s
00:05:08.835 sys	0m1.601s
00:05:08.835 ************************************
00:05:08.835 END TEST custom_alloc
00:05:08.835 ************************************
00:05:08.835 01:05:31 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:05:08.835 ************************************
00:05:08.835 START TEST no_shrink_alloc
00:05:08.835 ************************************
00:05:08.835 01:05:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:05:08.835 01:05:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49-52 -- # local size=2097152; node_ids=('0')
00:05:08.835 01:05:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:08.836 01:05:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:05:08.836 01:05:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:05:08.836 01:05:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:05:08.836 01:05:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:05:11.377 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:11.377 (0000:00:04.0-7 and 0000:80:04.4-7, all 8086 2021: Already using the vfio-pci driver)
00:05:11.377 (0000:80:04.0-3, all 8086 2021: Already using the vfio-pci driver)
00:05:11.377 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:05:11.377 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89-94 -- # local node sorted_t sorted_s surp resv anon
00:05:11.377 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:11.377 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:11.378 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:11.378 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170563664 kB' 'MemAvailable: 173798704 kB' 'Buffers: 3896 kB' 'Cached: 14671748 kB' 'SwapCached: 0 kB' 'Active: 11540592 kB' 'Inactive: 3694312 kB' 'Active(anon): 11122636 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 562452 kB' 'Mapped: 169724 kB' 'Shmem: 10563376 kB' 'KReclaimable: 533976 kB' 'Slab: 1190548 kB' 'SReclaimable: 533976 kB' 'SUnreclaim: 656572 kB' 'KernelStack: 20608 kB' 'PageTables: 8520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12653628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317320 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3937236 kB' 'DirectMap2M: 33490944 kB' 'DirectMap1G: 164626432 kB'
00:05:11.378 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # (field-by-field scan for AnonHugePages elided; every other key hits "continue")
00:05:11.379 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:11.379 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:11.379 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:11.379 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:05:11.379 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:11.379 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:11.379 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170567172 kB' 'MemAvailable: 173802212 kB' 'Buffers: 3896 kB' 'Cached: 14671752 kB' 'SwapCached: 0 kB' 'Active: 11541116 kB' 'Inactive: 3694312 kB' 'Active(anon): 11123160 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 563000 kB' 'Mapped: 169724 kB' 'Shmem: 10563380 kB' 'KReclaimable: 533976 kB' 'Slab: 1190548 kB' 'SReclaimable: 533976 kB' 'SUnreclaim: 656572 kB' 'KernelStack: 20736 kB' 'PageTables: 8676 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12653648 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317288 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3937236 kB' 'DirectMap2M: 33490944 kB' 'DirectMap1G: 164626432 kB'
00:05:11.379 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # (field-by-field scan for HugePages_Surp continues)
val _ 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.380 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.381 01:05:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.381 01:05:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 
00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170565540 kB' 'MemAvailable: 173800580 kB' 'Buffers: 3896 kB' 'Cached: 14671768 kB' 'SwapCached: 0 kB' 'Active: 11541124 kB' 'Inactive: 3694312 kB' 'Active(anon): 11123168 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 563032 kB' 'Mapped: 169660 kB' 'Shmem: 10563396 kB' 'KReclaimable: 533976 kB' 'Slab: 1190552 kB' 'SReclaimable: 533976 kB' 'SUnreclaim: 656576 kB' 'KernelStack: 20768 kB' 'PageTables: 8724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12652176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317336 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3937236 kB' 'DirectMap2M: 33490944 kB' 'DirectMap1G: 164626432 kB' 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.381 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.382 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.383 01:05:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:11.383 
nr_hugepages=1024 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:11.383 resv_hugepages=0 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:11.383 surplus_hugepages=0 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:11.383 anon_hugepages=0 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.383 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.384 01:05:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170565932 kB' 'MemAvailable: 173800972 kB' 'Buffers: 3896 kB' 'Cached: 14671788 kB' 'SwapCached: 0 kB' 'Active: 11541472 kB' 'Inactive: 3694312 kB' 'Active(anon): 11123516 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 563364 kB' 'Mapped: 169660 kB' 'Shmem: 10563416 kB' 'KReclaimable: 533976 kB' 'Slab: 1190552 kB' 'SReclaimable: 533976 kB' 'SUnreclaim: 656576 kB' 'KernelStack: 20832 kB' 'PageTables: 9204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12653692 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317320 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3937236 kB' 'DirectMap2M: 33490944 kB' 'DirectMap1G: 164626432 kB' 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.384 01:05:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.384 01:05:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.384 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.385 01:05:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... identical non-matching-key iterations elided (CmaTotal, CmaFree, Unaccepted) ...]
00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 91710140 kB' 'MemUsed: 5905488 kB' 'SwapCached: 0 kB' 'Active: 2222300 kB' 'Inactive: 219552 kB' 'Active(anon): 2060476 kB' 'Inactive(anon): 0 kB' 'Active(file): 161824 kB' 'Inactive(file): 219552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2279592 kB' 'Mapped: 85172 kB' 'AnonPages: 165408 kB' 'Shmem: 1898216 kB' 'KernelStack: 11752 kB' 'PageTables: 3488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 347940 kB' 'Slab: 656764 kB' 'SReclaimable: 347940 kB' 'SUnreclaim: 308824 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:11.385 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:05:11.386 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:11.386 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... identical non-matching-key iterations elided for the remaining node0 meminfo keys (MemUsed through HugePages_Free) ...]
00:05:11.648 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:11.648 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:11.648 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:11.648 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:11.648 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:11.648 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:11.648 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:11.648 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
node0=1024 expecting 1024
00:05:11.648 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:11.648 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:05:11.648 01:05:33
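[Editor's note] The trace above repeatedly exercises the key lookup in setup/common.sh's get_meminfo: it reads "Key: value" lines from /proc/meminfo (or a per-node sysfs meminfo, whose lines carry a "Node <N> " prefix) and echoes the value for one requested key. A minimal sketch of that pattern, under the assumption of the standard Linux meminfo layout; the third file argument is an illustration-only override, not part of SPDK's helper:

```shell
#!/usr/bin/env bash
# Sketch of the meminfo key lookup traced in this log (not SPDK's exact code).
# Prints the value for $1; $2 optionally selects a NUMA node; $3 is a
# test-only file override (an assumption added here for self-containment).
get_meminfo() {
  local get=$1 node=${2:-}
  local mem_f=${3:-/proc/meminfo} line var val _
  # Per-node statistics live in sysfs, one file per node.
  if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
  fi
  while read -r line; do
    line=${line#Node "$node" }            # drop the "Node <N> " prefix, if any
    IFS=': ' read -r var val _ <<<"$line" # split "Key:   value [kB]"
    if [[ $var == "$get" ]]; then
      echo "$val"
      return 0
    fi
  done <"$mem_f"
  return 1
}
```

Each non-matching key corresponds to one `[[ key == ... ]]` / `continue` pair in the xtrace output above, which is why the scan dominates the log.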
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:05:11.648 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:05:11.648 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:11.648 01:05:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:05:14.197 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:14.197 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:05:14.197 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:05:14.197 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:05:14.197 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:05:14.197 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:05:14.197 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:05:14.197 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:05:14.197 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:05:14.197 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:05:14.197 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:05:14.197 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:05:14.197 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:05:14.197 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:05:14.197 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:05:14.197 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:05:14.197 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:05:14.197 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:05:14.197 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:05:14.197 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:05:14.197 01:05:36
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:14.197 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:14.197 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:14.197 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:14.197 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:14.197 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:14.197 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:14.197 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:14.197 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:14.197 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:14.197 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:14.197 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:14.197 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:14.197 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:14.197 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:14.197 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:14.197 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170570640 kB' 'MemAvailable: 173805680 kB' 'Buffers: 3896 kB' 'Cached: 14671872 kB' 'SwapCached: 0 kB' 'Active: 11541148 kB' 'Inactive: 3694312 kB' 'Active(anon): 11123192 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 562892 kB' 'Mapped: 169740 kB' 'Shmem: 10563500 kB' 'KReclaimable: 533976 kB' 'Slab: 1190884 kB' 'SReclaimable: 533976 kB' 'SUnreclaim: 656908 kB' 'KernelStack: 20704 kB' 'PageTables: 8880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12652768 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317336 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3937236 kB' 'DirectMap2M: 33490944 kB' 'DirectMap1G: 164626432 kB'
00:05:14.197 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:14.197 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... identical non-matching-key iterations elided (MemFree through NFS_Unstable); the scan continues past the end of this capture ...]
01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.199 01:05:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.199 
01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170575276 kB' 'MemAvailable: 173810316 kB' 'Buffers: 3896 kB' 'Cached: 14671872 kB' 'SwapCached: 0 kB' 
'Active: 11541512 kB' 'Inactive: 3694312 kB' 'Active(anon): 11123556 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 563304 kB' 'Mapped: 169740 kB' 'Shmem: 10563500 kB' 'KReclaimable: 533976 kB' 'Slab: 1190864 kB' 'SReclaimable: 533976 kB' 'SUnreclaim: 656888 kB' 'KernelStack: 20640 kB' 'PageTables: 8344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12654036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317320 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3937236 kB' 'DirectMap2M: 33490944 kB' 'DirectMap1G: 164626432 kB' 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.199 01:05:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.199 01:05:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:05:14.199 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.200 01:05:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.200 01:05:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.200 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.201 01:05:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:14.201 01:05:36 
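The trace above is bash xtrace output of SPDK's `get_meminfo` helper in `test/setup/common.sh`: it reads `/proc/meminfo` (or a per-node `meminfo` under `/sys/devices/system/node/`), strips any `Node N ` prefix, then walks each line with `IFS=': ' read -r var val _` until the requested key matches, echoing its value (the backslash-escaped patterns such as `\H\u\g\e\P\a\g\e\s\_\S\u\r\p` are just how xtrace quotes the `[[ $var == $get ]]` comparison). A minimal, self-contained sketch of that pattern — a simplification, not the exact SPDK implementation — looks like this:

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo pattern seen in this trace (simplified; the real
# helper in test/setup/common.sh has more options). extglob is needed for the
# +([0-9]) pattern used to strip "Node N " prefixes from per-node meminfo files.
shopt -s extglob

get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node statistics live under /sys/devices/system/node/nodeN/meminfo.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    local -a mem
    mapfile -t mem < "$mem_f"
    # Node-local files prefix every line with "Node N "; drop that prefix so
    # both file formats parse identically.
    mem=("${mem[@]#Node +([0-9]) }")

    local line var val _
    for line in "${mem[@]}"; do
        # Split "Key:   value kB" on colon/space into key and value.
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

get_meminfo MemTotal        # prints the MemTotal value in kB
get_meminfo HugePages_Surp  # prints the surplus hugepage count
```

The escaped-pattern comparisons repeated in the log correspond to the `[[ $var == "$get" ]]` test here: xtrace prints one comparison (and one `continue`) per meminfo key until the key matches, which is why a single `get_meminfo HugePages_Surp` call expands into dozens of trace lines.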
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170574952 kB' 'MemAvailable: 173809992 kB' 'Buffers: 3896 kB' 'Cached: 14671896 kB' 'SwapCached: 0 kB' 'Active: 11541092 kB' 'Inactive: 3694312 kB' 'Active(anon): 11123136 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 562832 kB' 'Mapped: 169668 kB' 'Shmem: 10563524 kB' 'KReclaimable: 533976 kB' 'Slab: 1190968 kB' 'SReclaimable: 533976 kB' 'SUnreclaim: 656992 kB' 'KernelStack: 20800 kB' 'PageTables: 8736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12652568 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317320 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3937236 kB' 'DirectMap2M: 33490944 kB' 'DirectMap1G: 164626432 kB' 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.201 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.202 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.203 01:05:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.203 01:05:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.203 
01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:14.203 nr_hugepages=1024 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:14.203 resv_hugepages=0 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:14.203 surplus_hugepages=0 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:14.203 anon_hugepages=0 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170569896 kB' 'MemAvailable: 173804936 kB' 'Buffers: 3896 kB' 'Cached: 14671916 kB' 'SwapCached: 0 kB' 'Active: 11544008 kB' 'Inactive: 3694312 kB' 'Active(anon): 11126052 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 
565744 kB' 'Mapped: 170172 kB' 'Shmem: 10563544 kB' 'KReclaimable: 533976 kB' 'Slab: 1190968 kB' 'SReclaimable: 533976 kB' 'SUnreclaim: 656992 kB' 'KernelStack: 20832 kB' 'PageTables: 8784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12656072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317320 kB' 'VmallocChunk: 0 kB' 'Percpu: 110976 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3937236 kB' 'DirectMap2M: 33490944 kB' 'DirectMap1G: 164626432 kB' 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.203 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.204 01:05:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.204 01:05:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.204 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 
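The trace above is `setup/common.sh` scanning a meminfo-style listing field by field with `IFS=': '` and `read -r var val _`, echoing the value once the key matches (here `HugePages_Total` → `1024`). A minimal standalone sketch of that parsing pattern, with a hypothetical helper name (`get_field`) standing in for the script's own function:

```shell
# Sketch of the field-scan pattern in the trace above (hypothetical
# stand-in for setup/common.sh's get_meminfo): split each line on
# ': ' and print the value whose key matches the requested field.
get_field() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # xtrace shows one [[ ... ]] test per meminfo key, as here
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

printf '%s\n' 'MemTotal: 97615628 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' \
    | get_field HugePages_Total
```

The per-line `continue` entries in the log are exactly the non-matching branches of this loop, one per `/proc/meminfo` key.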
00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 91697000 kB' 'MemUsed: 5918628 kB' 'SwapCached: 0 kB' 'Active: 2222440 kB' 'Inactive: 219552 kB' 'Active(anon): 2060616 kB' 'Inactive(anon): 0 kB' 'Active(file): 161824 kB' 'Inactive(file): 219552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2279716 kB' 'Mapped: 85180 kB' 'AnonPages: 165392 kB' 'Shmem: 1898340 kB' 'KernelStack: 11752 kB' 'PageTables: 3488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 347940 kB' 'Slab: 656868 kB' 'SReclaimable: 347940 kB' 'SUnreclaim: 308928 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.205 
01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.205 01:05:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.205 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.206 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.207 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.207 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.207 01:05:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.207 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.207 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.207 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.207 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:14.207 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:14.207 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:14.207 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:14.207 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:14.207 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:14.207 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:14.207 node0=1024 expecting 1024 00:05:14.207 01:05:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:14.207 00:05:14.207 real 0m5.506s 00:05:14.207 user 0m2.218s 00:05:14.207 sys 0m3.413s 00:05:14.207 01:05:36 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:14.207 01:05:36 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:14.207 ************************************ 00:05:14.207 END TEST no_shrink_alloc 00:05:14.207 ************************************ 00:05:14.207 01:05:36 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:14.207 01:05:36 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:05:14.207 01:05:36 setup.sh.hugepages -- 
setup/hugepages.sh@37 -- # local node hp 00:05:14.207 01:05:36 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:14.207 01:05:36 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:14.207 01:05:36 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:14.489 01:05:36 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:14.489 01:05:36 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:14.489 01:05:36 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:14.489 01:05:36 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:14.489 01:05:36 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:14.489 01:05:36 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:14.489 01:05:36 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:14.489 01:05:36 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:14.489 01:05:36 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:14.489 00:05:14.489 real 0m19.488s 00:05:14.489 user 0m7.036s 00:05:14.489 sys 0m11.500s 00:05:14.489 01:05:36 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:14.489 01:05:36 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:14.489 ************************************ 00:05:14.489 END TEST hugepages 00:05:14.489 ************************************ 00:05:14.489 01:05:36 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:14.489 01:05:36 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:05:14.489 01:05:36 setup.sh -- common/autotest_common.sh@1099 -- 
# '[' 2 -le 1 ']' 00:05:14.489 01:05:36 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.489 01:05:36 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:14.489 ************************************ 00:05:14.489 START TEST driver 00:05:14.489 ************************************ 00:05:14.489 01:05:36 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:05:14.489 * Looking for test storage... 00:05:14.489 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:14.489 01:05:36 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:05:14.489 01:05:36 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:14.489 01:05:36 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:18.690 01:05:40 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:18.690 01:05:40 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:18.690 01:05:40 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.690 01:05:40 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:18.690 ************************************ 00:05:18.690 START TEST guess_driver 00:05:18.690 ************************************ 00:05:18.690 01:05:40 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:05:18.690 01:05:40 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:18.690 01:05:40 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:18.690 01:05:40 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:18.690 01:05:40 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:18.690 01:05:40 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:05:18.690 01:05:40 
setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:18.690 01:05:40 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:18.690 01:05:40 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:05:18.690 01:05:40 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:18.690 01:05:40 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 174 > 0 )) 00:05:18.690 01:05:40 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:05:18.690 01:05:40 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:05:18.690 01:05:40 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:05:18.690 01:05:40 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:05:18.690 01:05:40 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:05:18.690 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:18.690 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:18.690 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:18.690 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:18.690 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:05:18.690 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:05:18.690 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:05:18.690 01:05:40 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:05:18.690 01:05:40 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:05:18.690 01:05:40 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # 
driver=vfio-pci 00:05:18.690 01:05:40 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:18.690 01:05:40 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' Looking for driver=vfio-pci 00:05:18.690 01:05:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:18.690 01:05:40 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:05:18.690 01:05:40 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:05:18.690 01:05:40 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:21.230 01:05:43 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:21.230 01:05:43 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:21.230 01:05:43 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:22.171 01:05:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:22.171 01:05:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:22.171 01:05:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:22.171 01:05:44 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:22.172 01:05:44 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:22.172 01:05:44 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:22.172 01:05:44 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:26.371 00:05:26.371 real 0m7.416s 00:05:26.371 user 0m2.126s 00:05:26.371 sys 0m3.752s 00:05:26.371 01:05:48 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.371 01:05:48 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:05:26.371 ************************************ 00:05:26.371 END TEST guess_driver 00:05:26.371 ************************************ 00:05:26.371 01:05:48 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:05:26.371 00:05:26.371 real 0m11.439s 00:05:26.371 user 0m3.277s 00:05:26.371 sys 0m5.922s 00:05:26.371 01:05:48 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.371 01:05:48 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:26.371 ************************************ 00:05:26.371 END TEST driver 00:05:26.371 ************************************ 00:05:26.371 01:05:48 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:26.371 01:05:48 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:26.371 01:05:48 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:26.371 01:05:48 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.371 01:05:48 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:26.371 ************************************
START TEST devices 00:05:26.371 ************************************ 00:05:26.371 01:05:48 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:26.371 * Looking for test storage... 00:05:26.371 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:26.371 01:05:48 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:26.371 01:05:48 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:26.371 01:05:48 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:26.371 01:05:48 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:29.665 01:05:51 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:29.665 01:05:51 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:29.665 01:05:51 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:29.665 01:05:51 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:29.665 01:05:51 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:29.665 01:05:51 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:29.665 01:05:51 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:29.665 01:05:51 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:29.665 01:05:51 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:29.665 01:05:51 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:29.665 01:05:51 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:29.665 01:05:51 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:29.665 01:05:51 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:29.665 
01:05:51 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:29.665 01:05:51 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:29.665 01:05:51 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:29.665 01:05:51 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:29.665 01:05:51 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:5e:00.0 00:05:29.665 01:05:51 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:05:29.666 01:05:51 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:29.666 01:05:51 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:29.666 01:05:51 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:05:29.666 No valid GPT data, bailing 00:05:29.666 01:05:51 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:29.666 01:05:51 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:29.666 01:05:51 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:29.666 01:05:51 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:29.666 01:05:51 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:29.666 01:05:51 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:29.666 01:05:51 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:05:29.666 01:05:51 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:05:29.666 01:05:51 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:29.666 01:05:51 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5e:00.0 00:05:29.666 01:05:51 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:29.666 01:05:51 setup.sh.devices -- setup/devices.sh@211 -- # declare -r 
test_disk=nvme0n1 00:05:29.666 01:05:51 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:29.666 01:05:51 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:29.666 01:05:51 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.666 01:05:51 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:29.666 ************************************ 00:05:29.666 START TEST nvme_mount 00:05:29.666 ************************************ 00:05:29.666 01:05:51 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:05:29.666 01:05:51 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:29.666 01:05:51 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:29.666 01:05:51 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:29.666 01:05:51 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:29.666 01:05:51 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:29.666 01:05:51 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:29.666 01:05:51 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:29.666 01:05:51 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:29.666 01:05:51 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:29.666 01:05:51 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:29.666 01:05:51 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:29.666 01:05:51 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:29.666 01:05:51 setup.sh.devices.nvme_mount -- 
setup/common.sh@46 -- # (( part <= part_no )) 00:05:29.666 01:05:51 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:29.666 01:05:51 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:29.666 01:05:51 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:29.666 01:05:51 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:29.666 01:05:51 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:29.666 01:05:51 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:30.236 Creating new GPT entries in memory. 00:05:30.236 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:30.236 other utilities. 00:05:30.236 01:05:52 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:30.236 01:05:52 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:30.236 01:05:52 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:30.236 01:05:52 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:30.236 01:05:52 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:31.174 Creating new GPT entries in memory. 00:05:31.174 The operation has completed successfully. 
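The sgdisk bounds in the `--new=1:2048:2099199` call above come straight from the sector arithmetic traced at setup/common.sh@51-59: the 1 GiB partition size is divided down to 512-byte sectors and offset from the 2048-sector start. A minimal sketch of that calculation (variable names follow the xtrace; nothing here touches a real disk):

```shell
# Reproduce the partition-bound arithmetic from common.sh@51-59:
# a 1 GiB partition starting at sector 2048 ends at sector 2099199.
size=1073741824               # bytes (setup/common.sh@41)
(( size /= 512 ))             # convert to 512-byte sectors -> 2097152
part_start=2048               # first usable sector on a fresh GPT
(( part_end = part_start + size - 1 ))
echo "--new=1:${part_start}:${part_end}"   # prints --new=1:2048:2099199
```

Feeding these bounds back into `sgdisk /dev/nvme0n1` reproduces the exact command the log shows.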
00:05:31.174 01:05:53 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:31.174 01:05:53 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:31.174 01:05:53 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 701571 00:05:31.174 01:05:53 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:31.175 01:05:53 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:05:31.175 01:05:53 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:31.175 01:05:53 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:31.175 01:05:53 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:31.175 01:05:53 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:31.435 01:05:53 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:31.435 01:05:53 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:05:31.435 01:05:53 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:31.435 01:05:53 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:31.435 01:05:53 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 
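The verify step that follows this mkfs/mount pass scans `setup.sh status` output one line at a time with `read -r pci _ _ status`, flipping `found=1` when the allowed BDF (`PCI_ALLOWED=0000:5e:00.0`) turns up. A hedged sketch of that loop; `find_allowed_dev` and the four-column input shape are illustrative assumptions, not the script's real helper:

```shell
# Read "<bdf> <type> <vendor> <status...>" lines from stdin and succeed
# only if the allowed BDF appears, mirroring devices.sh@59-66 above.
find_allowed_dev() {
    local want=$1 pci _ status found=0
    while read -r pci _ _ status; do
        [[ $pci == "$want" ]] && found=1
    done
    (( found == 1 ))
}
```

Usage would look like `setup.sh status | find_allowed_dev 0000:5e:00.0`.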
00:05:31.435 01:05:53 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:31.435 01:05:53 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:31.435 01:05:53 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:31.435 01:05:53 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:31.435 01:05:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.435 01:05:53 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:05:31.435 01:05:53 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:31.435 01:05:53 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:31.435 01:05:53 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:33.977 01:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:33.977 01:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:33.977 01:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:33.977 01:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.977 01:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:33.977 01:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.977 01:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:33.977 01:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.977 01:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:33.977 01:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:33.977 01:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:33.977 01:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:33.978 01:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:33.978 01:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:33.978
01:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:33.978 01:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:33.978 01:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:34.238 01:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:34.238 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:34.238 01:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:34.238 01:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:34.498 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:34.498 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:34.498 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:34.498 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:34.498 01:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:05:34.498 01:05:56 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:05:34.498 01:05:56 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:34.498 01:05:56 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:34.498 01:05:56 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:34.498 01:05:56 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:34.498 01:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:34.498 01:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:05:34.498 01:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:34.498 01:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:34.498 01:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:34.498 01:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:34.498 01:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:34.498 01:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:34.498 01:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:34.498 01:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.498 01:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:05:34.498 01:05:56 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:34.498 01:05:56 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:34.498 01:05:56 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:37.041 01:05:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == 
\0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:37.041 01:05:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:37.041 01:05:59 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:37.041 01:05:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.041 01:05:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:37.041 01:05:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.041 01:05:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:37.041 01:05:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.041 01:05:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:37.041 01:05:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.041 01:05:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:37.041 01:05:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.041 01:05:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:37.041 01:05:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.041 01:05:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:37.041 01:05:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.041 01:05:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:37.041 01:05:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:05:37.041 01:05:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:37.041 01:05:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.041 01:05:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:37.041 01:05:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.041 01:05:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:37.041 01:05:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.041 01:05:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:37.041 01:05:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.041 01:05:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:37.041 01:05:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.041 01:05:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:37.041 01:05:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.041 01:05:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:37.041 01:05:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.041 01:05:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:37.041 01:05:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.041 01:05:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:37.041 01:05:59 setup.sh.devices.nvme_mount 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.041 01:05:59 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:37.041 01:05:59 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:37.041 01:05:59 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:37.041 01:05:59 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:37.041 01:05:59 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:37.041 01:05:59 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:37.041 01:05:59 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:5e:00.0 data@nvme0n1 '' '' 00:05:37.041 01:05:59 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:05:37.041 01:05:59 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:37.041 01:05:59 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:37.041 01:05:59 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:37.041 01:05:59 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:37.041 01:05:59 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:37.041 01:05:59 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:37.041 01:05:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.041 01:05:59 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:05:37.041 01:05:59 
setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:37.041 01:05:59 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:37.041 01:05:59 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:39.023 01:06:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:39.023 01:06:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:39.023 01:06:01 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:39.023 01:06:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.023 01:06:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:39.023 01:06:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.023 01:06:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:39.023 01:06:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.023 01:06:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:39.023 01:06:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.023 01:06:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:39.023 01:06:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.023 01:06:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:39.023 01:06:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.023 01:06:01 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:39.023 01:06:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.023 01:06:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:39.023 01:06:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.023 01:06:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:39.023 01:06:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.023 01:06:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:39.023 01:06:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.023 01:06:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:39.023 01:06:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.023 01:06:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:39.023 01:06:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.023 01:06:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:39.023 01:06:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.023 01:06:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:39.023 01:06:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.023 01:06:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:39.023 01:06:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:05:39.023 01:06:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:39.023 01:06:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.023 01:06:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:39.023 01:06:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.023 01:06:01 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:39.023 01:06:01 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:39.023 01:06:01 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:39.023 01:06:01 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:39.023 01:06:01 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:39.283 01:06:01 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:39.283 01:06:01 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:39.283 01:06:01 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:39.283 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:39.283 00:05:39.283 real 0m9.971s 00:05:39.283 user 0m2.750s 00:05:39.283 sys 0m4.972s 00:05:39.283 01:06:01 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:39.283 01:06:01 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:39.283 ************************************ 00:05:39.283 END TEST nvme_mount 00:05:39.283 ************************************ 00:05:39.283 01:06:01 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:05:39.283 01:06:01 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 
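The verify loop that dominates the nvme_mount output above reads `setup.sh config` line by line and only sets `found=1` when the allowed PCI address reports the expected active mounts. A minimal self-contained sketch of that matching logic follows; the here-doc input lines are modeled on the log, and the combined condition is illustrative (devices.sh performs the BDF test and the status-pattern test as two separate `[[ ]]` checks).

```shell
#!/usr/bin/env bash
# Sketch of the devices.sh@59-66 verify loop: scan PCI status lines and mark
# the device found only when it is the allowed BDF *and* its status column
# lists the mounts under test. Sample input stands in for `setup.sh config`.
dev=0000:5e:00.0
mounts=nvme0n1:nvme0n1
found=0
while read -r pci _ _ status; do
  # devices.sh checks the BDF and the "Active devices" pattern separately;
  # they are combined here for brevity.
  if [[ $pci == "$dev" && $status == *"Active devices: "*"$mounts"* ]]; then
    found=1
  fi
done <<'EOF'
0000:5e:00.0 8086 0a54 Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev
0000:00:04.7 8086 2021 ioatdma is still attached
EOF
echo "found=$found"
```

Because the here-doc feeds the loop without a pipeline, `read` runs in the current shell and `found` survives the loop, which is why devices.sh can test `(( found == 1 ))` afterwards.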
00:05:39.283 01:06:01 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:39.283 01:06:01 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.283 01:06:01 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:39.283 ************************************ 00:05:39.283 START TEST dm_mount 00:05:39.283 ************************************ 00:05:39.283 01:06:01 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:05:39.283 01:06:01 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:39.283 01:06:01 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:39.283 01:06:01 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:39.283 01:06:01 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:39.283 01:06:01 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:39.283 01:06:01 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:39.283 01:06:01 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:39.283 01:06:01 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:39.283 01:06:01 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:39.283 01:06:01 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:39.283 01:06:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:39.283 01:06:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:39.283 01:06:01 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:39.283 01:06:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:39.283 01:06:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:39.283 01:06:01 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # 
parts+=("${disk}p$part") 00:05:39.283 01:06:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:39.283 01:06:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:39.283 01:06:01 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:39.283 01:06:01 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:39.283 01:06:01 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:40.224 Creating new GPT entries in memory. 00:05:40.224 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:40.224 other utilities. 00:05:40.224 01:06:02 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:40.224 01:06:02 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:40.224 01:06:02 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:40.224 01:06:02 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:40.224 01:06:02 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:41.163 Creating new GPT entries in memory. 00:05:41.163 The operation has completed successfully. 00:05:41.163 01:06:03 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:41.163 01:06:03 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:41.163 01:06:03 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:41.163 01:06:03 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:41.163 01:06:03 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:42.546 The operation has completed successfully. 00:05:42.546 01:06:04 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:42.546 01:06:04 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:42.546 01:06:04 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 705692 00:05:42.546 01:06:04 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:42.546 01:06:04 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:42.546 01:06:04 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:42.546 01:06:04 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:42.546 01:06:04 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:42.546 01:06:04 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:42.546 01:06:04 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:42.546 01:06:04 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:42.546 01:06:04 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:42.546 01:06:04 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:05:42.546 01:06:04 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-2 00:05:42.546 01:06:04 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:05:42.546 01:06:04 setup.sh.devices.dm_mount -- 
setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:05:42.546 01:06:04 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:42.546 01:06:04 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:05:42.546 01:06:04 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:42.546 01:06:04 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:42.546 01:06:04 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:42.546 01:06:04 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:42.546 01:06:04 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:5e:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:42.546 01:06:04 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:05:42.546 01:06:04 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:42.546 01:06:04 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:42.546 01:06:04 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:42.546 01:06:04 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:42.546 01:06:04 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:42.546 01:06:04 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:42.546 01:06:04 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:42.546 01:06:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.546 01:06:04 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:05:42.546 01:06:04 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:42.546 01:06:04 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:42.546 01:06:04 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:45.089 01:06:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:45.089 01:06:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:45.089 01:06:07 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:45.089 01:06:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.089 01:06:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:45.089 01:06:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.089 01:06:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:45.089 01:06:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.089 01:06:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:45.089 01:06:07 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.089 01:06:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:45.089 01:06:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.089 01:06:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:45.089 01:06:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.089 01:06:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:45.089 01:06:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.089 01:06:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:45.089 01:06:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.089 01:06:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:45.089 01:06:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.089 01:06:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:45.089 01:06:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.089 01:06:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:45.089 01:06:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.089 01:06:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:45.089 01:06:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.089 01:06:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:45.089 01:06:07 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.089 01:06:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:45.089 01:06:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.089 01:06:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:45.089 01:06:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.089 01:06:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:45.089 01:06:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.089 01:06:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:45.089 01:06:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.089 01:06:07 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:45.089 01:06:07 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:45.089 01:06:07 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:45.089 01:06:07 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:45.089 01:06:07 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:45.089 01:06:07 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:45.089 01:06:07 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:5e:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:05:45.089 01:06:07 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:05:45.089 01:06:07 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:05:45.089 01:06:07 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:45.089 01:06:07 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:45.089 01:06:07 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:45.089 01:06:07 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:45.089 01:06:07 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:45.089 01:06:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.089 01:06:07 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:05:45.089 01:06:07 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:45.089 01:06:07 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:45.089 01:06:07 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:46.999 01:06:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:46.999 01:06:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:05:46.999 01:06:09 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:46.999 01:06:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:46.999 01:06:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:46.999 01:06:09 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:46.999 01:06:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:46.999 01:06:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:46.999 01:06:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:46.999 01:06:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:46.999 01:06:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:46.999 01:06:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:46.999 01:06:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:46.999 01:06:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:46.999 01:06:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:46.999 01:06:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:46.999 01:06:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:46.999 01:06:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:46.999 01:06:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:46.999 01:06:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:46.999 01:06:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:46.999 01:06:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:46.999 01:06:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:46.999 01:06:09 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:46.999 01:06:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:47.000 01:06:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.000 01:06:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:47.000 01:06:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.000 01:06:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:47.000 01:06:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.000 01:06:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:47.000 01:06:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.000 01:06:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:47.000 01:06:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.000 01:06:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:47.000 01:06:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.000 01:06:09 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:47.000 01:06:09 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:47.000 01:06:09 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:47.000 01:06:09 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:47.000 01:06:09 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:47.000 01:06:09 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L 
/dev/mapper/nvme_dm_test ]] 00:05:47.000 01:06:09 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:47.260 01:06:09 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:47.260 01:06:09 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:47.260 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:47.260 01:06:09 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:47.260 01:06:09 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:47.260 00:05:47.260 real 0m7.950s 00:05:47.260 user 0m1.692s 00:05:47.260 sys 0m3.168s 00:05:47.260 01:06:09 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.260 01:06:09 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:47.260 ************************************ 00:05:47.260 END TEST dm_mount 00:05:47.260 ************************************ 00:05:47.260 01:06:09 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:05:47.260 01:06:09 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:47.260 01:06:09 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:47.260 01:06:09 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:47.260 01:06:09 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:47.260 01:06:09 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:47.260 01:06:09 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:47.260 01:06:09 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:47.520 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:47.520 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 
50 41 52 54 00:05:47.520 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:47.520 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:47.520 01:06:09 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:47.520 01:06:09 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:47.520 01:06:09 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:47.520 01:06:09 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:47.520 01:06:09 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:47.520 01:06:09 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:47.520 01:06:09 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:47.520 00:05:47.520 real 0m21.582s 00:05:47.520 user 0m5.722s 00:05:47.520 sys 0m10.394s 00:05:47.520 01:06:09 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.520 01:06:09 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:47.520 ************************************ 00:05:47.520 END TEST devices 00:05:47.520 ************************************ 00:05:47.520 01:06:09 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:47.520 00:05:47.520 real 1m11.541s 00:05:47.520 user 0m22.304s 00:05:47.520 sys 0m39.155s 00:05:47.520 01:06:09 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.520 01:06:09 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:47.520 ************************************ 00:05:47.520 END TEST setup.sh 00:05:47.521 ************************************ 00:05:47.521 01:06:09 -- common/autotest_common.sh@1142 -- # return 0 00:05:47.521 01:06:09 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:50.061 Hugepages 00:05:50.061 node hugesize free / total 
00:05:50.061 node0 1048576kB 0 / 0 00:05:50.061 node0 2048kB 2048 / 2048 00:05:50.061 node1 1048576kB 0 / 0 00:05:50.061 node1 2048kB 0 / 0 00:05:50.061 00:05:50.061 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:50.061 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:05:50.061 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:05:50.061 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:05:50.061 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:05:50.061 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:05:50.061 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:05:50.061 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:05:50.061 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:05:50.061 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:05:50.061 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:05:50.061 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:05:50.061 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:05:50.061 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:05:50.061 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:05:50.061 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:05:50.061 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:05:50.061 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:05:50.061 01:06:12 -- spdk/autotest.sh@130 -- # uname -s 00:05:50.061 01:06:12 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:50.061 01:06:12 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:50.061 01:06:12 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:52.623 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:52.623 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:52.623 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:52.623 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:52.623 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:52.623 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:52.623 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:52.623 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:52.623 
0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:52.623 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:52.623 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:52.623 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:52.623 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:52.623 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:52.623 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:52.623 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:53.192 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:05:53.452 01:06:15 -- common/autotest_common.sh@1532 -- # sleep 1 00:05:54.391 01:06:16 -- common/autotest_common.sh@1533 -- # bdfs=() 00:05:54.391 01:06:16 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:54.391 01:06:16 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:54.391 01:06:16 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:54.391 01:06:16 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:54.391 01:06:16 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:54.391 01:06:16 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:54.391 01:06:16 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:54.391 01:06:16 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:54.391 01:06:16 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:54.391 01:06:16 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:05:54.391 01:06:16 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:56.931 Waiting for block devices as requested 00:05:56.931 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:05:56.931 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:56.931 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:56.931 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:56.931 0000:00:04.4 (8086 
2021): vfio-pci -> ioatdma 00:05:57.191 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:57.191 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:57.191 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:57.191 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:57.451 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:57.451 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:57.451 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:57.711 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:57.711 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:57.711 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:57.711 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:57.973 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:57.973 01:06:20 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:57.973 01:06:20 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:05:57.973 01:06:20 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:05:57.973 01:06:20 -- common/autotest_common.sh@1502 -- # grep 0000:5e:00.0/nvme/nvme 00:05:57.973 01:06:20 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:05:57.973 01:06:20 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:05:57.973 01:06:20 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:05:57.973 01:06:20 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:57.973 01:06:20 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:57.973 01:06:20 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:57.973 01:06:20 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:57.973 01:06:20 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:57.973 01:06:20 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:57.973 01:06:20 -- 
common/autotest_common.sh@1545 -- # oacs=' 0xe' 00:05:57.973 01:06:20 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:57.973 01:06:20 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:57.973 01:06:20 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:57.973 01:06:20 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:57.973 01:06:20 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:57.973 01:06:20 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:57.973 01:06:20 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:57.973 01:06:20 -- common/autotest_common.sh@1557 -- # continue 00:05:57.973 01:06:20 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:57.973 01:06:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:57.973 01:06:20 -- common/autotest_common.sh@10 -- # set +x 00:05:57.973 01:06:20 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:57.973 01:06:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:57.973 01:06:20 -- common/autotest_common.sh@10 -- # set +x 00:05:57.973 01:06:20 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:00.585 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:00.585 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:06:00.585 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:00.585 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:00.585 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:00.585 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:06:00.585 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:06:00.585 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:00.585 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:00.585 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:06:00.585 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:00.585 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:00.585 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:00.585 0000:80:04.2 (8086 
2021): ioatdma -> vfio-pci 00:06:00.585 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:06:00.585 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:01.524 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:06:01.524 01:06:23 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:06:01.524 01:06:23 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:01.524 01:06:23 -- common/autotest_common.sh@10 -- # set +x 00:06:01.524 01:06:23 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:06:01.524 01:06:23 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:06:01.524 01:06:23 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:06:01.524 01:06:23 -- common/autotest_common.sh@1577 -- # bdfs=() 00:06:01.524 01:06:23 -- common/autotest_common.sh@1577 -- # local bdfs 00:06:01.524 01:06:23 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:06:01.524 01:06:23 -- common/autotest_common.sh@1513 -- # bdfs=() 00:06:01.524 01:06:23 -- common/autotest_common.sh@1513 -- # local bdfs 00:06:01.524 01:06:23 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:01.524 01:06:23 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:01.524 01:06:23 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:06:01.524 01:06:24 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:06:01.524 01:06:24 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:06:01.524 01:06:24 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:06:01.524 01:06:24 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:06:01.785 01:06:24 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:06:01.785 01:06:24 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:06:01.785 01:06:24 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:06:01.785 01:06:24 -- 
common/autotest_common.sh@1586 -- # printf '%s\n' 0000:5e:00.0 00:06:01.785 01:06:24 -- common/autotest_common.sh@1592 -- # [[ -z 0000:5e:00.0 ]] 00:06:01.785 01:06:24 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=714545 00:06:01.785 01:06:24 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:01.785 01:06:24 -- common/autotest_common.sh@1598 -- # waitforlisten 714545 00:06:01.785 01:06:24 -- common/autotest_common.sh@829 -- # '[' -z 714545 ']' 00:06:01.785 01:06:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.785 01:06:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:01.785 01:06:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.785 01:06:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:01.785 01:06:24 -- common/autotest_common.sh@10 -- # set +x 00:06:01.785 [2024-07-25 01:06:24.072828] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
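For context on the pre-cleanup step traced above: `autotest_common.sh` greps the OACS (Optional Admin Command Support) field out of `nvme id-ctrl`, and the log shows `oacs=' 0xe'` with `oacs_ns_manage=8`, i.e. bit 3 (Namespace Management and Attachment) is set, so the namespace-revert path proceeds. A minimal sketch of that bit test, using the value this controller reported in the log (0xe is captured output, not a constant):

```shell
# Sketch of the OACS bit test seen in the trace above; 0xe comes from the
# id-ctrl output for this controller, not from the spec.
oacs=0xe
oacs_ns_manage=$(( oacs & 0x8 ))   # bit 3: Namespace Management/Attachment
echo "$oacs_ns_manage"             # non-zero means the revert path runs
```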
00:06:01.785 [2024-07-25 01:06:24.072870] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid714545 ] 00:06:01.785 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.785 [2024-07-25 01:06:24.125176] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.785 [2024-07-25 01:06:24.205000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.723 01:06:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:02.723 01:06:24 -- common/autotest_common.sh@862 -- # return 0 00:06:02.723 01:06:24 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:06:02.723 01:06:24 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:06:02.723 01:06:24 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:06:06.028 nvme0n1 00:06:06.028 01:06:27 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:06:06.028 [2024-07-25 01:06:28.023864] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:06:06.028 request: 00:06:06.028 { 00:06:06.028 "nvme_ctrlr_name": "nvme0", 00:06:06.028 "password": "test", 00:06:06.028 "method": "bdev_nvme_opal_revert", 00:06:06.028 "req_id": 1 00:06:06.028 } 00:06:06.028 Got JSON-RPC error response 00:06:06.028 response: 00:06:06.028 { 00:06:06.028 "code": -32602, 00:06:06.028 "message": "Invalid parameters" 00:06:06.028 } 00:06:06.028 01:06:28 -- common/autotest_common.sh@1604 -- # true 00:06:06.028 01:06:28 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:06:06.028 01:06:28 -- common/autotest_common.sh@1608 -- # killprocess 714545 00:06:06.028 01:06:28 -- 
common/autotest_common.sh@948 -- # '[' -z 714545 ']' 00:06:06.028 01:06:28 -- common/autotest_common.sh@952 -- # kill -0 714545 00:06:06.028 01:06:28 -- common/autotest_common.sh@953 -- # uname 00:06:06.028 01:06:28 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:06.028 01:06:28 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 714545 00:06:06.028 01:06:28 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:06.028 01:06:28 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:06.028 01:06:28 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 714545' 00:06:06.028 killing process with pid 714545 00:06:06.028 01:06:28 -- common/autotest_common.sh@967 -- # kill 714545 00:06:06.028 01:06:28 -- common/autotest_common.sh@972 -- # wait 714545 00:06:07.410 01:06:29 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:06:07.410 01:06:29 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:06:07.410 01:06:29 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:07.410 01:06:29 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:07.410 01:06:29 -- spdk/autotest.sh@162 -- # timing_enter lib 00:06:07.410 01:06:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:07.410 01:06:29 -- common/autotest_common.sh@10 -- # set +x 00:06:07.410 01:06:29 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:06:07.410 01:06:29 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:06:07.410 01:06:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:07.410 01:06:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.410 01:06:29 -- common/autotest_common.sh@10 -- # set +x 00:06:07.410 ************************************ 00:06:07.410 START TEST env 00:06:07.410 ************************************ 00:06:07.410 01:06:29 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:06:07.410 * Looking for 
test storage... 00:06:07.410 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:06:07.410 01:06:29 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:06:07.410 01:06:29 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:07.410 01:06:29 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.410 01:06:29 env -- common/autotest_common.sh@10 -- # set +x 00:06:07.410 ************************************ 00:06:07.410 START TEST env_memory 00:06:07.410 ************************************ 00:06:07.410 01:06:29 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:06:07.410 00:06:07.410 00:06:07.410 CUnit - A unit testing framework for C - Version 2.1-3 00:06:07.410 http://cunit.sourceforge.net/ 00:06:07.410 00:06:07.410 00:06:07.410 Suite: memory 00:06:07.410 Test: alloc and free memory map ...[2024-07-25 01:06:29.853166] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:07.410 passed 00:06:07.410 Test: mem map translation ...[2024-07-25 01:06:29.872501] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:07.410 [2024-07-25 01:06:29.872515] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:07.410 [2024-07-25 01:06:29.872553] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:07.410 [2024-07-25 01:06:29.872559] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:07.410 passed 00:06:07.671 Test: mem map registration ...[2024-07-25 01:06:29.911486] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:06:07.671 [2024-07-25 01:06:29.911503] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:06:07.671 passed 00:06:07.671 Test: mem map adjacent registrations ...passed 00:06:07.671 00:06:07.671 Run Summary: Type Total Ran Passed Failed Inactive 00:06:07.671 suites 1 1 n/a 0 0 00:06:07.671 tests 4 4 4 0 0 00:06:07.671 asserts 152 152 152 0 n/a 00:06:07.671 00:06:07.671 Elapsed time = 0.140 seconds 00:06:07.671 00:06:07.671 real 0m0.152s 00:06:07.671 user 0m0.144s 00:06:07.671 sys 0m0.008s 00:06:07.671 01:06:29 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.671 01:06:29 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:07.671 ************************************ 00:06:07.671 END TEST env_memory 00:06:07.671 ************************************ 00:06:07.671 01:06:29 env -- common/autotest_common.sh@1142 -- # return 0 00:06:07.671 01:06:29 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:07.671 01:06:29 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:07.671 01:06:29 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.671 01:06:29 env -- common/autotest_common.sh@10 -- # set +x 00:06:07.671 ************************************ 00:06:07.671 START TEST env_vtophys 00:06:07.671 ************************************ 00:06:07.671 01:06:30 env.env_vtophys -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:07.671 EAL: lib.eal log level changed from notice to debug 00:06:07.671 EAL: Detected lcore 0 as core 0 on socket 0 00:06:07.671 EAL: Detected lcore 1 as core 1 on socket 0 00:06:07.671 EAL: Detected lcore 2 as core 2 on socket 0 00:06:07.671 EAL: Detected lcore 3 as core 3 on socket 0 00:06:07.671 EAL: Detected lcore 4 as core 4 on socket 0 00:06:07.671 EAL: Detected lcore 5 as core 5 on socket 0 00:06:07.671 EAL: Detected lcore 6 as core 6 on socket 0 00:06:07.671 EAL: Detected lcore 7 as core 8 on socket 0 00:06:07.671 EAL: Detected lcore 8 as core 9 on socket 0 00:06:07.671 EAL: Detected lcore 9 as core 10 on socket 0 00:06:07.671 EAL: Detected lcore 10 as core 11 on socket 0 00:06:07.671 EAL: Detected lcore 11 as core 12 on socket 0 00:06:07.671 EAL: Detected lcore 12 as core 13 on socket 0 00:06:07.671 EAL: Detected lcore 13 as core 16 on socket 0 00:06:07.671 EAL: Detected lcore 14 as core 17 on socket 0 00:06:07.671 EAL: Detected lcore 15 as core 18 on socket 0 00:06:07.671 EAL: Detected lcore 16 as core 19 on socket 0 00:06:07.671 EAL: Detected lcore 17 as core 20 on socket 0 00:06:07.671 EAL: Detected lcore 18 as core 21 on socket 0 00:06:07.671 EAL: Detected lcore 19 as core 25 on socket 0 00:06:07.671 EAL: Detected lcore 20 as core 26 on socket 0 00:06:07.671 EAL: Detected lcore 21 as core 27 on socket 0 00:06:07.671 EAL: Detected lcore 22 as core 28 on socket 0 00:06:07.671 EAL: Detected lcore 23 as core 29 on socket 0 00:06:07.671 EAL: Detected lcore 24 as core 0 on socket 1 00:06:07.671 EAL: Detected lcore 25 as core 1 on socket 1 00:06:07.671 EAL: Detected lcore 26 as core 2 on socket 1 00:06:07.671 EAL: Detected lcore 27 as core 3 on socket 1 00:06:07.671 EAL: Detected lcore 28 as core 4 on socket 1 00:06:07.671 EAL: Detected lcore 29 as core 5 on socket 1 00:06:07.671 EAL: Detected lcore 30 as core 6 on socket 1 00:06:07.671 EAL: Detected lcore 31 as core 9 on socket 
1 00:06:07.671 EAL: Detected lcore 32 as core 10 on socket 1 00:06:07.671 EAL: Detected lcore 33 as core 11 on socket 1 00:06:07.671 EAL: Detected lcore 34 as core 12 on socket 1 00:06:07.671 EAL: Detected lcore 35 as core 13 on socket 1 00:06:07.671 EAL: Detected lcore 36 as core 16 on socket 1 00:06:07.671 EAL: Detected lcore 37 as core 17 on socket 1 00:06:07.671 EAL: Detected lcore 38 as core 18 on socket 1 00:06:07.671 EAL: Detected lcore 39 as core 19 on socket 1 00:06:07.671 EAL: Detected lcore 40 as core 20 on socket 1 00:06:07.671 EAL: Detected lcore 41 as core 21 on socket 1 00:06:07.671 EAL: Detected lcore 42 as core 24 on socket 1 00:06:07.671 EAL: Detected lcore 43 as core 25 on socket 1 00:06:07.671 EAL: Detected lcore 44 as core 26 on socket 1 00:06:07.671 EAL: Detected lcore 45 as core 27 on socket 1 00:06:07.671 EAL: Detected lcore 46 as core 28 on socket 1 00:06:07.671 EAL: Detected lcore 47 as core 29 on socket 1 00:06:07.671 EAL: Detected lcore 48 as core 0 on socket 0 00:06:07.671 EAL: Detected lcore 49 as core 1 on socket 0 00:06:07.671 EAL: Detected lcore 50 as core 2 on socket 0 00:06:07.671 EAL: Detected lcore 51 as core 3 on socket 0 00:06:07.671 EAL: Detected lcore 52 as core 4 on socket 0 00:06:07.671 EAL: Detected lcore 53 as core 5 on socket 0 00:06:07.671 EAL: Detected lcore 54 as core 6 on socket 0 00:06:07.671 EAL: Detected lcore 55 as core 8 on socket 0 00:06:07.671 EAL: Detected lcore 56 as core 9 on socket 0 00:06:07.671 EAL: Detected lcore 57 as core 10 on socket 0 00:06:07.671 EAL: Detected lcore 58 as core 11 on socket 0 00:06:07.671 EAL: Detected lcore 59 as core 12 on socket 0 00:06:07.671 EAL: Detected lcore 60 as core 13 on socket 0 00:06:07.671 EAL: Detected lcore 61 as core 16 on socket 0 00:06:07.671 EAL: Detected lcore 62 as core 17 on socket 0 00:06:07.671 EAL: Detected lcore 63 as core 18 on socket 0 00:06:07.671 EAL: Detected lcore 64 as core 19 on socket 0 00:06:07.671 EAL: Detected lcore 65 as core 20 on socket 0 
00:06:07.671 EAL: Detected lcore 66 as core 21 on socket 0 00:06:07.671 EAL: Detected lcore 67 as core 25 on socket 0 00:06:07.671 EAL: Detected lcore 68 as core 26 on socket 0 00:06:07.671 EAL: Detected lcore 69 as core 27 on socket 0 00:06:07.671 EAL: Detected lcore 70 as core 28 on socket 0 00:06:07.671 EAL: Detected lcore 71 as core 29 on socket 0 00:06:07.671 EAL: Detected lcore 72 as core 0 on socket 1 00:06:07.671 EAL: Detected lcore 73 as core 1 on socket 1 00:06:07.671 EAL: Detected lcore 74 as core 2 on socket 1 00:06:07.671 EAL: Detected lcore 75 as core 3 on socket 1 00:06:07.671 EAL: Detected lcore 76 as core 4 on socket 1 00:06:07.671 EAL: Detected lcore 77 as core 5 on socket 1 00:06:07.672 EAL: Detected lcore 78 as core 6 on socket 1 00:06:07.672 EAL: Detected lcore 79 as core 9 on socket 1 00:06:07.672 EAL: Detected lcore 80 as core 10 on socket 1 00:06:07.672 EAL: Detected lcore 81 as core 11 on socket 1 00:06:07.672 EAL: Detected lcore 82 as core 12 on socket 1 00:06:07.672 EAL: Detected lcore 83 as core 13 on socket 1 00:06:07.672 EAL: Detected lcore 84 as core 16 on socket 1 00:06:07.672 EAL: Detected lcore 85 as core 17 on socket 1 00:06:07.672 EAL: Detected lcore 86 as core 18 on socket 1 00:06:07.672 EAL: Detected lcore 87 as core 19 on socket 1 00:06:07.672 EAL: Detected lcore 88 as core 20 on socket 1 00:06:07.672 EAL: Detected lcore 89 as core 21 on socket 1 00:06:07.672 EAL: Detected lcore 90 as core 24 on socket 1 00:06:07.672 EAL: Detected lcore 91 as core 25 on socket 1 00:06:07.672 EAL: Detected lcore 92 as core 26 on socket 1 00:06:07.672 EAL: Detected lcore 93 as core 27 on socket 1 00:06:07.672 EAL: Detected lcore 94 as core 28 on socket 1 00:06:07.672 EAL: Detected lcore 95 as core 29 on socket 1 00:06:07.672 EAL: Maximum logical cores by configuration: 128 00:06:07.672 EAL: Detected CPU lcores: 96 00:06:07.672 EAL: Detected NUMA nodes: 2 00:06:07.672 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:06:07.672 EAL: Detected 
shared linkage of DPDK 00:06:07.672 EAL: No shared files mode enabled, IPC will be disabled 00:06:07.672 EAL: Bus pci wants IOVA as 'DC' 00:06:07.672 EAL: Buses did not request a specific IOVA mode. 00:06:07.672 EAL: IOMMU is available, selecting IOVA as VA mode. 00:06:07.672 EAL: Selected IOVA mode 'VA' 00:06:07.672 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.672 EAL: Probing VFIO support... 00:06:07.672 EAL: IOMMU type 1 (Type 1) is supported 00:06:07.672 EAL: IOMMU type 7 (sPAPR) is not supported 00:06:07.672 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:06:07.672 EAL: VFIO support initialized 00:06:07.672 EAL: Ask a virtual area of 0x2e000 bytes 00:06:07.672 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:07.672 EAL: Setting up physically contiguous memory... 00:06:07.672 EAL: Setting maximum number of open files to 524288 00:06:07.672 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:07.672 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:06:07.672 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:07.672 EAL: Ask a virtual area of 0x61000 bytes 00:06:07.672 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:07.672 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:07.672 EAL: Ask a virtual area of 0x400000000 bytes 00:06:07.672 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:07.672 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:07.672 EAL: Ask a virtual area of 0x61000 bytes 00:06:07.672 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:07.672 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:07.672 EAL: Ask a virtual area of 0x400000000 bytes 00:06:07.672 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:07.672 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:07.672 EAL: Ask a virtual area of 0x61000 bytes 00:06:07.672 EAL: 
Virtual area found at 0x200800400000 (size = 0x61000) 00:06:07.672 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:07.672 EAL: Ask a virtual area of 0x400000000 bytes 00:06:07.672 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:07.672 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:07.672 EAL: Ask a virtual area of 0x61000 bytes 00:06:07.672 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:07.672 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:07.672 EAL: Ask a virtual area of 0x400000000 bytes 00:06:07.672 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:07.672 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:07.672 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:06:07.672 EAL: Ask a virtual area of 0x61000 bytes 00:06:07.672 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:06:07.672 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:07.672 EAL: Ask a virtual area of 0x400000000 bytes 00:06:07.672 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:06:07.672 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:06:07.672 EAL: Ask a virtual area of 0x61000 bytes 00:06:07.672 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:06:07.672 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:07.672 EAL: Ask a virtual area of 0x400000000 bytes 00:06:07.672 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:06:07.672 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:06:07.672 EAL: Ask a virtual area of 0x61000 bytes 00:06:07.672 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:06:07.672 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:07.672 EAL: Ask a virtual area of 0x400000000 bytes 00:06:07.672 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 
00:06:07.672 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:06:07.672 EAL: Ask a virtual area of 0x61000 bytes 00:06:07.672 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:06:07.672 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:07.672 EAL: Ask a virtual area of 0x400000000 bytes 00:06:07.672 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:06:07.672 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:06:07.672 EAL: Hugepages will be freed exactly as allocated. 00:06:07.672 EAL: No shared files mode enabled, IPC is disabled 00:06:07.672 EAL: No shared files mode enabled, IPC is disabled 00:06:07.672 EAL: TSC frequency is ~2300000 KHz 00:06:07.672 EAL: Main lcore 0 is ready (tid=7fb041967a00;cpuset=[0]) 00:06:07.672 EAL: Trying to obtain current memory policy. 00:06:07.672 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:07.672 EAL: Restoring previous memory policy: 0 00:06:07.672 EAL: request: mp_malloc_sync 00:06:07.672 EAL: No shared files mode enabled, IPC is disabled 00:06:07.672 EAL: Heap on socket 0 was expanded by 2MB 00:06:07.672 EAL: No shared files mode enabled, IPC is disabled 00:06:07.672 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:07.672 EAL: Mem event callback 'spdk:(nil)' registered 00:06:07.672 00:06:07.672 00:06:07.672 CUnit - A unit testing framework for C - Version 2.1-3 00:06:07.672 http://cunit.sourceforge.net/ 00:06:07.672 00:06:07.672 00:06:07.672 Suite: components_suite 00:06:07.672 Test: vtophys_malloc_test ...passed 00:06:07.672 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:06:07.672 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:07.672 EAL: Restoring previous memory policy: 4 00:06:07.672 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.672 EAL: request: mp_malloc_sync 00:06:07.672 EAL: No shared files mode enabled, IPC is disabled 00:06:07.672 EAL: Heap on socket 0 was expanded by 4MB 00:06:07.672 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.672 EAL: request: mp_malloc_sync 00:06:07.672 EAL: No shared files mode enabled, IPC is disabled 00:06:07.672 EAL: Heap on socket 0 was shrunk by 4MB 00:06:07.672 EAL: Trying to obtain current memory policy. 00:06:07.672 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:07.672 EAL: Restoring previous memory policy: 4 00:06:07.672 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.672 EAL: request: mp_malloc_sync 00:06:07.672 EAL: No shared files mode enabled, IPC is disabled 00:06:07.672 EAL: Heap on socket 0 was expanded by 6MB 00:06:07.672 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.672 EAL: request: mp_malloc_sync 00:06:07.672 EAL: No shared files mode enabled, IPC is disabled 00:06:07.672 EAL: Heap on socket 0 was shrunk by 6MB 00:06:07.672 EAL: Trying to obtain current memory policy. 00:06:07.672 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:07.672 EAL: Restoring previous memory policy: 4 00:06:07.672 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.672 EAL: request: mp_malloc_sync 00:06:07.672 EAL: No shared files mode enabled, IPC is disabled 00:06:07.672 EAL: Heap on socket 0 was expanded by 10MB 00:06:07.672 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.672 EAL: request: mp_malloc_sync 00:06:07.672 EAL: No shared files mode enabled, IPC is disabled 00:06:07.672 EAL: Heap on socket 0 was shrunk by 10MB 00:06:07.672 EAL: Trying to obtain current memory policy. 
00:06:07.672 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:07.672 EAL: Restoring previous memory policy: 4 00:06:07.672 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.672 EAL: request: mp_malloc_sync 00:06:07.672 EAL: No shared files mode enabled, IPC is disabled 00:06:07.672 EAL: Heap on socket 0 was expanded by 18MB 00:06:07.672 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.672 EAL: request: mp_malloc_sync 00:06:07.672 EAL: No shared files mode enabled, IPC is disabled 00:06:07.672 EAL: Heap on socket 0 was shrunk by 18MB 00:06:07.672 EAL: Trying to obtain current memory policy. 00:06:07.672 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:07.672 EAL: Restoring previous memory policy: 4 00:06:07.672 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.672 EAL: request: mp_malloc_sync 00:06:07.672 EAL: No shared files mode enabled, IPC is disabled 00:06:07.672 EAL: Heap on socket 0 was expanded by 34MB 00:06:07.672 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.672 EAL: request: mp_malloc_sync 00:06:07.672 EAL: No shared files mode enabled, IPC is disabled 00:06:07.672 EAL: Heap on socket 0 was shrunk by 34MB 00:06:07.672 EAL: Trying to obtain current memory policy. 00:06:07.672 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:07.672 EAL: Restoring previous memory policy: 4 00:06:07.672 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.672 EAL: request: mp_malloc_sync 00:06:07.672 EAL: No shared files mode enabled, IPC is disabled 00:06:07.672 EAL: Heap on socket 0 was expanded by 66MB 00:06:07.672 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.672 EAL: request: mp_malloc_sync 00:06:07.672 EAL: No shared files mode enabled, IPC is disabled 00:06:07.672 EAL: Heap on socket 0 was shrunk by 66MB 00:06:07.672 EAL: Trying to obtain current memory policy. 
00:06:07.672 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:07.933 EAL: Restoring previous memory policy: 4 00:06:07.933 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.933 EAL: request: mp_malloc_sync 00:06:07.933 EAL: No shared files mode enabled, IPC is disabled 00:06:07.933 EAL: Heap on socket 0 was expanded by 130MB 00:06:07.933 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.933 EAL: request: mp_malloc_sync 00:06:07.933 EAL: No shared files mode enabled, IPC is disabled 00:06:07.933 EAL: Heap on socket 0 was shrunk by 130MB 00:06:07.933 EAL: Trying to obtain current memory policy. 00:06:07.933 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:07.933 EAL: Restoring previous memory policy: 4 00:06:07.933 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.933 EAL: request: mp_malloc_sync 00:06:07.933 EAL: No shared files mode enabled, IPC is disabled 00:06:07.933 EAL: Heap on socket 0 was expanded by 258MB 00:06:07.933 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.933 EAL: request: mp_malloc_sync 00:06:07.933 EAL: No shared files mode enabled, IPC is disabled 00:06:07.933 EAL: Heap on socket 0 was shrunk by 258MB 00:06:07.933 EAL: Trying to obtain current memory policy. 00:06:07.933 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:08.192 EAL: Restoring previous memory policy: 4 00:06:08.192 EAL: Calling mem event callback 'spdk:(nil)' 00:06:08.192 EAL: request: mp_malloc_sync 00:06:08.192 EAL: No shared files mode enabled, IPC is disabled 00:06:08.192 EAL: Heap on socket 0 was expanded by 514MB 00:06:08.192 EAL: Calling mem event callback 'spdk:(nil)' 00:06:08.192 EAL: request: mp_malloc_sync 00:06:08.192 EAL: No shared files mode enabled, IPC is disabled 00:06:08.192 EAL: Heap on socket 0 was shrunk by 514MB 00:06:08.192 EAL: Trying to obtain current memory policy. 
00:06:08.192 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:08.452 EAL: Restoring previous memory policy: 4
00:06:08.452 EAL: Calling mem event callback 'spdk:(nil)'
00:06:08.452 EAL: request: mp_malloc_sync
00:06:08.452 EAL: No shared files mode enabled, IPC is disabled
00:06:08.452 EAL: Heap on socket 0 was expanded by 1026MB
00:06:08.452 EAL: Calling mem event callback 'spdk:(nil)'
00:06:08.712 EAL: request: mp_malloc_sync
00:06:08.712 EAL: No shared files mode enabled, IPC is disabled
00:06:08.712 EAL: Heap on socket 0 was shrunk by 1026MB
00:06:08.712 passed
00:06:08.712 
00:06:08.712 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:08.712               suites      1      1    n/a      0        0
00:06:08.712                tests      2      2      2      0        0
00:06:08.712              asserts    497    497    497      0      n/a
00:06:08.712 
00:06:08.712 Elapsed time = 0.958 seconds
00:06:08.712 EAL: Calling mem event callback 'spdk:(nil)'
00:06:08.712 EAL: request: mp_malloc_sync
00:06:08.712 EAL: No shared files mode enabled, IPC is disabled
00:06:08.712 EAL: Heap on socket 0 was shrunk by 2MB
00:06:08.712 EAL: No shared files mode enabled, IPC is disabled
00:06:08.712 EAL: No shared files mode enabled, IPC is disabled
00:06:08.712 EAL: No shared files mode enabled, IPC is disabled
00:06:08.712 
00:06:08.712 real	0m1.065s
00:06:08.712 user	0m0.624s
00:06:08.712 sys	0m0.414s
00:06:08.712 01:06:31 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:08.712 01:06:31 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:06:08.712 ************************************
00:06:08.712 END TEST env_vtophys
00:06:08.712 ************************************
00:06:08.712 01:06:31 env -- common/autotest_common.sh@1142 -- # return 0
00:06:08.712 01:06:31 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:06:08.712 01:06:31 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:08.712 01:06:31 env -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:08.712 01:06:31 env -- common/autotest_common.sh@10 -- # set +x
00:06:08.712 ************************************
00:06:08.712 START TEST env_pci
00:06:08.712 ************************************
00:06:08.712 01:06:31 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:06:08.712 
00:06:08.712 
00:06:08.712 CUnit - A unit testing framework for C - Version 2.1-3
00:06:08.712 http://cunit.sourceforge.net/
00:06:08.712 
00:06:08.712 
00:06:08.712 Suite: pci
00:06:08.712 Test: pci_hook ...[2024-07-25 01:06:31.176354] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 715964 has claimed it
00:06:08.712 EAL: Cannot find device (10000:00:01.0)
00:06:08.712 EAL: Failed to attach device on primary process
00:06:08.712 passed
00:06:08.712 
00:06:08.712 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:08.712               suites      1      1    n/a      0        0
00:06:08.712                tests      1      1      1      0        0
00:06:08.712              asserts     25     25     25      0      n/a
00:06:08.712 
00:06:08.712 Elapsed time = 0.026 seconds
00:06:08.712 
00:06:08.712 real	0m0.043s
00:06:08.712 user	0m0.011s
00:06:08.712 sys	0m0.032s
00:06:08.712 01:06:31 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:08.712 01:06:31 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:06:08.712 ************************************
00:06:08.712 END TEST env_pci
00:06:08.712 ************************************
00:06:08.972 01:06:31 env -- common/autotest_common.sh@1142 -- # return 0
00:06:08.972 01:06:31 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:06:08.972 01:06:31 env -- env/env.sh@15 -- # uname
00:06:08.972 01:06:31 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:06:08.972 01:06:31 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:06:08.972 01:06:31 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:06:08.972 01:06:31 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']'
00:06:08.972 01:06:31 env -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:08.972 01:06:31 env -- common/autotest_common.sh@10 -- # set +x
00:06:08.972 ************************************
00:06:08.972 START TEST env_dpdk_post_init
00:06:08.972 ************************************
00:06:08.972 01:06:31 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:06:08.972 EAL: Detected CPU lcores: 96
00:06:08.972 EAL: Detected NUMA nodes: 2
00:06:08.972 EAL: Detected shared linkage of DPDK
00:06:08.972 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:06:08.972 EAL: Selected IOVA mode 'VA'
00:06:08.972 EAL: No free 2048 kB hugepages reported on node 1
00:06:08.972 EAL: VFIO support initialized
00:06:08.972 TELEMETRY: No legacy callbacks, legacy socket not created
00:06:08.972 EAL: Using IOMMU type 1 (Type 1)
00:06:08.972 EAL: Ignore mapping IO port bar(1)
00:06:08.972 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0)
00:06:08.972 EAL: Ignore mapping IO port bar(1)
00:06:08.972 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0)
00:06:08.972 EAL: Ignore mapping IO port bar(1)
00:06:08.972 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0)
00:06:08.972 EAL: Ignore mapping IO port bar(1)
00:06:08.972 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0)
00:06:08.972 EAL: Ignore mapping IO port bar(1)
00:06:08.972 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0)
00:06:08.972 EAL: Ignore mapping IO port bar(1)
00:06:08.972 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0)
00:06:08.972 EAL: Ignore mapping IO port bar(1)
00:06:08.972 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0)
00:06:09.232 EAL: Ignore mapping IO port bar(1)
00:06:09.232 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0)
00:06:09.802 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0)
00:06:09.802 EAL: Ignore mapping IO port bar(1)
00:06:09.802 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1)
00:06:09.802 EAL: Ignore mapping IO port bar(1)
00:06:09.802 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1)
00:06:09.802 EAL: Ignore mapping IO port bar(1)
00:06:09.802 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1)
00:06:09.802 EAL: Ignore mapping IO port bar(1)
00:06:09.802 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1)
00:06:09.802 EAL: Ignore mapping IO port bar(1)
00:06:09.802 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1)
00:06:09.802 EAL: Ignore mapping IO port bar(1)
00:06:09.802 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1)
00:06:10.062 EAL: Ignore mapping IO port bar(1)
00:06:10.062 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1)
00:06:10.062 EAL: Ignore mapping IO port bar(1)
00:06:10.062 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1)
00:06:13.367 EAL: Releasing PCI mapped resource for 0000:5e:00.0
00:06:13.367 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000
00:06:13.367 Starting DPDK initialization...
00:06:13.367 Starting SPDK post initialization...
00:06:13.367 SPDK NVMe probe
00:06:13.367 Attaching to 0000:5e:00.0
00:06:13.367 Attached to 0000:5e:00.0
00:06:13.367 Cleaning up...
00:06:13.367 
00:06:13.367 real	0m4.339s
00:06:13.367 user	0m3.291s
00:06:13.367 sys	0m0.117s
00:06:13.367 01:06:35 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:13.367 01:06:35 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:06:13.367 ************************************
00:06:13.367 END TEST env_dpdk_post_init
00:06:13.367 ************************************
00:06:13.367 01:06:35 env -- common/autotest_common.sh@1142 -- # return 0
00:06:13.367 01:06:35 env -- env/env.sh@26 -- # uname
00:06:13.367 01:06:35 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:06:13.367 01:06:35 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:06:13.367 01:06:35 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:13.367 01:06:35 env -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:13.367 01:06:35 env -- common/autotest_common.sh@10 -- # set +x
00:06:13.367 ************************************
00:06:13.367 START TEST env_mem_callbacks
00:06:13.367 ************************************
00:06:13.367 01:06:35 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:06:13.367 EAL: Detected CPU lcores: 96
00:06:13.367 EAL: Detected NUMA nodes: 2
00:06:13.367 EAL: Detected shared linkage of DPDK
00:06:13.367 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:06:13.367 EAL: Selected IOVA mode 'VA'
00:06:13.367 EAL: No free 2048 kB hugepages reported on node 1
00:06:13.367 EAL: VFIO support initialized
00:06:13.368 TELEMETRY: No legacy callbacks, legacy socket not created
00:06:13.368 
00:06:13.368 
00:06:13.368 CUnit - A unit testing framework for C - Version 2.1-3
00:06:13.368 http://cunit.sourceforge.net/
00:06:13.368 
00:06:13.368 
00:06:13.368 Suite: memory
00:06:13.368 Test: test ...
00:06:13.368 register 0x200000200000 2097152
00:06:13.368 malloc 3145728
00:06:13.368 register 0x200000400000 4194304
00:06:13.368 buf 0x200000500000 len 3145728 PASSED
00:06:13.368 malloc 64
00:06:13.368 buf 0x2000004fff40 len 64 PASSED
00:06:13.368 malloc 4194304
00:06:13.368 register 0x200000800000 6291456
00:06:13.368 buf 0x200000a00000 len 4194304 PASSED
00:06:13.368 free 0x200000500000 3145728
00:06:13.368 free 0x2000004fff40 64
00:06:13.368 unregister 0x200000400000 4194304 PASSED
00:06:13.368 free 0x200000a00000 4194304
00:06:13.368 unregister 0x200000800000 6291456 PASSED
00:06:13.368 malloc 8388608
00:06:13.368 register 0x200000400000 10485760
00:06:13.368 buf 0x200000600000 len 8388608 PASSED
00:06:13.368 free 0x200000600000 8388608
00:06:13.368 unregister 0x200000400000 10485760 PASSED
00:06:13.368 passed
00:06:13.368 
00:06:13.368 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:13.368               suites      1      1    n/a      0        0
00:06:13.368                tests      1      1      1      0        0
00:06:13.368              asserts     15     15     15      0      n/a
00:06:13.368 
00:06:13.368 Elapsed time = 0.005 seconds
00:06:13.368 
00:06:13.368 real	0m0.052s
00:06:13.368 user	0m0.017s
00:06:13.368 sys	0m0.035s
00:06:13.368 01:06:35 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:13.368 01:06:35 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:06:13.368 ************************************
00:06:13.368 END TEST env_mem_callbacks
00:06:13.368 ************************************
00:06:13.368 01:06:35 env -- common/autotest_common.sh@1142 -- # return 0
00:06:13.368 
00:06:13.368 real	0m6.071s
00:06:13.368 user	0m4.253s
00:06:13.368 sys	0m0.887s
00:06:13.368 01:06:35 env -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:13.368 01:06:35 env -- common/autotest_common.sh@10 -- # set +x
00:06:13.368 ************************************
00:06:13.368 END TEST env
00:06:13.368 ************************************
00:06:13.368 01:06:35 -- common/autotest_common.sh@1142 -- # return 0
00:06:13.368 01:06:35 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:06:13.368 01:06:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:13.368 01:06:35 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:13.368 01:06:35 -- common/autotest_common.sh@10 -- # set +x
00:06:13.368 ************************************
00:06:13.368 START TEST rpc
00:06:13.368 ************************************
00:06:13.368 01:06:35 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:06:13.628 * Looking for test storage...
00:06:13.628 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:06:13.628 01:06:35 rpc -- rpc/rpc.sh@65 -- # spdk_pid=716814
00:06:13.628 01:06:35 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:06:13.628 01:06:35 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:06:13.628 01:06:35 rpc -- rpc/rpc.sh@67 -- # waitforlisten 716814
00:06:13.628 01:06:35 rpc -- common/autotest_common.sh@829 -- # '[' -z 716814 ']'
00:06:13.628 01:06:35 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:13.628 01:06:35 rpc -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:13.628 01:06:35 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:13.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:13.628 01:06:35 rpc -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:13.628 01:06:35 rpc -- common/autotest_common.sh@10 -- # set +x
00:06:13.628 [2024-07-25 01:06:35.955428] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization...
00:06:13.628 [2024-07-25 01:06:35.955470] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid716814 ]
00:06:13.628 EAL: No free 2048 kB hugepages reported on node 1
00:06:13.628 [2024-07-25 01:06:36.008951] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:13.628 [2024-07-25 01:06:36.082301] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:06:13.628 [2024-07-25 01:06:36.082341] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 716814' to capture a snapshot of events at runtime.
00:06:13.628 [2024-07-25 01:06:36.082347] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:06:13.628 [2024-07-25 01:06:36.082353] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:06:13.628 [2024-07-25 01:06:36.082358] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid716814 for offline analysis/debug.
00:06:13.628 [2024-07-25 01:06:36.082394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:14.569 01:06:36 rpc -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:14.569 01:06:36 rpc -- common/autotest_common.sh@862 -- # return 0
00:06:14.569 01:06:36 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:06:14.569 01:06:36 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:06:14.569 01:06:36 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:06:14.569 01:06:36 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:06:14.569 01:06:36 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:14.569 01:06:36 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:14.569 01:06:36 rpc -- common/autotest_common.sh@10 -- # set +x
00:06:14.569 ************************************
00:06:14.569 START TEST rpc_integrity
00:06:14.569 ************************************
00:06:14.569 01:06:36 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity
00:06:14.569 01:06:36 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:06:14.569 01:06:36 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:14.569 01:06:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:14.569 01:06:36 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:14.569 01:06:36 rpc.rpc_integrity --
rpc/rpc.sh@12 -- # bdevs='[]' 00:06:14.569 01:06:36 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:14.569 01:06:36 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:14.569 01:06:36 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:14.569 01:06:36 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:14.569 01:06:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:14.569 01:06:36 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:14.569 01:06:36 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:14.569 01:06:36 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:14.569 01:06:36 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:14.569 01:06:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:14.569 01:06:36 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:14.569 01:06:36 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:14.569 { 00:06:14.569 "name": "Malloc0", 00:06:14.569 "aliases": [ 00:06:14.569 "a7edd543-94d8-4748-b08f-3be162c41f0b" 00:06:14.569 ], 00:06:14.569 "product_name": "Malloc disk", 00:06:14.569 "block_size": 512, 00:06:14.569 "num_blocks": 16384, 00:06:14.569 "uuid": "a7edd543-94d8-4748-b08f-3be162c41f0b", 00:06:14.569 "assigned_rate_limits": { 00:06:14.569 "rw_ios_per_sec": 0, 00:06:14.569 "rw_mbytes_per_sec": 0, 00:06:14.569 "r_mbytes_per_sec": 0, 00:06:14.569 "w_mbytes_per_sec": 0 00:06:14.569 }, 00:06:14.569 "claimed": false, 00:06:14.569 "zoned": false, 00:06:14.569 "supported_io_types": { 00:06:14.569 "read": true, 00:06:14.569 "write": true, 00:06:14.569 "unmap": true, 00:06:14.569 "flush": true, 00:06:14.569 "reset": true, 00:06:14.569 "nvme_admin": false, 00:06:14.569 "nvme_io": false, 00:06:14.569 "nvme_io_md": false, 00:06:14.569 "write_zeroes": true, 00:06:14.569 "zcopy": true, 00:06:14.569 "get_zone_info": false, 00:06:14.569 
"zone_management": false, 00:06:14.569 "zone_append": false, 00:06:14.569 "compare": false, 00:06:14.569 "compare_and_write": false, 00:06:14.569 "abort": true, 00:06:14.569 "seek_hole": false, 00:06:14.569 "seek_data": false, 00:06:14.569 "copy": true, 00:06:14.569 "nvme_iov_md": false 00:06:14.569 }, 00:06:14.569 "memory_domains": [ 00:06:14.569 { 00:06:14.569 "dma_device_id": "system", 00:06:14.569 "dma_device_type": 1 00:06:14.569 }, 00:06:14.569 { 00:06:14.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:14.569 "dma_device_type": 2 00:06:14.569 } 00:06:14.569 ], 00:06:14.569 "driver_specific": {} 00:06:14.569 } 00:06:14.569 ]' 00:06:14.569 01:06:36 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:14.569 01:06:36 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:14.569 01:06:36 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:14.569 01:06:36 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:14.569 01:06:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:14.569 [2024-07-25 01:06:36.903621] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:14.569 [2024-07-25 01:06:36.903653] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:14.569 [2024-07-25 01:06:36.903664] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x22c22d0 00:06:14.569 [2024-07-25 01:06:36.903671] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:14.569 [2024-07-25 01:06:36.904746] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:14.569 [2024-07-25 01:06:36.904766] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:14.569 Passthru0 00:06:14.569 01:06:36 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:14.569 01:06:36 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:06:14.569 01:06:36 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:14.569 01:06:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:14.569 01:06:36 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:14.569 01:06:36 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:14.569 { 00:06:14.569 "name": "Malloc0", 00:06:14.569 "aliases": [ 00:06:14.569 "a7edd543-94d8-4748-b08f-3be162c41f0b" 00:06:14.569 ], 00:06:14.569 "product_name": "Malloc disk", 00:06:14.569 "block_size": 512, 00:06:14.569 "num_blocks": 16384, 00:06:14.569 "uuid": "a7edd543-94d8-4748-b08f-3be162c41f0b", 00:06:14.569 "assigned_rate_limits": { 00:06:14.569 "rw_ios_per_sec": 0, 00:06:14.569 "rw_mbytes_per_sec": 0, 00:06:14.569 "r_mbytes_per_sec": 0, 00:06:14.569 "w_mbytes_per_sec": 0 00:06:14.569 }, 00:06:14.569 "claimed": true, 00:06:14.569 "claim_type": "exclusive_write", 00:06:14.569 "zoned": false, 00:06:14.569 "supported_io_types": { 00:06:14.569 "read": true, 00:06:14.570 "write": true, 00:06:14.570 "unmap": true, 00:06:14.570 "flush": true, 00:06:14.570 "reset": true, 00:06:14.570 "nvme_admin": false, 00:06:14.570 "nvme_io": false, 00:06:14.570 "nvme_io_md": false, 00:06:14.570 "write_zeroes": true, 00:06:14.570 "zcopy": true, 00:06:14.570 "get_zone_info": false, 00:06:14.570 "zone_management": false, 00:06:14.570 "zone_append": false, 00:06:14.570 "compare": false, 00:06:14.570 "compare_and_write": false, 00:06:14.570 "abort": true, 00:06:14.570 "seek_hole": false, 00:06:14.570 "seek_data": false, 00:06:14.570 "copy": true, 00:06:14.570 "nvme_iov_md": false 00:06:14.570 }, 00:06:14.570 "memory_domains": [ 00:06:14.570 { 00:06:14.570 "dma_device_id": "system", 00:06:14.570 "dma_device_type": 1 00:06:14.570 }, 00:06:14.570 { 00:06:14.570 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:14.570 "dma_device_type": 2 00:06:14.570 } 00:06:14.570 ], 00:06:14.570 "driver_specific": {} 00:06:14.570 }, 00:06:14.570 { 
00:06:14.570 "name": "Passthru0", 00:06:14.570 "aliases": [ 00:06:14.570 "f5735691-4eb0-5b5c-94d7-8c1365a6c732" 00:06:14.570 ], 00:06:14.570 "product_name": "passthru", 00:06:14.570 "block_size": 512, 00:06:14.570 "num_blocks": 16384, 00:06:14.570 "uuid": "f5735691-4eb0-5b5c-94d7-8c1365a6c732", 00:06:14.570 "assigned_rate_limits": { 00:06:14.570 "rw_ios_per_sec": 0, 00:06:14.570 "rw_mbytes_per_sec": 0, 00:06:14.570 "r_mbytes_per_sec": 0, 00:06:14.570 "w_mbytes_per_sec": 0 00:06:14.570 }, 00:06:14.570 "claimed": false, 00:06:14.570 "zoned": false, 00:06:14.570 "supported_io_types": { 00:06:14.570 "read": true, 00:06:14.570 "write": true, 00:06:14.570 "unmap": true, 00:06:14.570 "flush": true, 00:06:14.570 "reset": true, 00:06:14.570 "nvme_admin": false, 00:06:14.570 "nvme_io": false, 00:06:14.570 "nvme_io_md": false, 00:06:14.570 "write_zeroes": true, 00:06:14.570 "zcopy": true, 00:06:14.570 "get_zone_info": false, 00:06:14.570 "zone_management": false, 00:06:14.570 "zone_append": false, 00:06:14.570 "compare": false, 00:06:14.570 "compare_and_write": false, 00:06:14.570 "abort": true, 00:06:14.570 "seek_hole": false, 00:06:14.570 "seek_data": false, 00:06:14.570 "copy": true, 00:06:14.570 "nvme_iov_md": false 00:06:14.570 }, 00:06:14.570 "memory_domains": [ 00:06:14.570 { 00:06:14.570 "dma_device_id": "system", 00:06:14.570 "dma_device_type": 1 00:06:14.570 }, 00:06:14.570 { 00:06:14.570 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:14.570 "dma_device_type": 2 00:06:14.570 } 00:06:14.570 ], 00:06:14.570 "driver_specific": { 00:06:14.570 "passthru": { 00:06:14.570 "name": "Passthru0", 00:06:14.570 "base_bdev_name": "Malloc0" 00:06:14.570 } 00:06:14.570 } 00:06:14.570 } 00:06:14.570 ]' 00:06:14.570 01:06:36 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:14.570 01:06:36 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:14.570 01:06:36 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:14.570 01:06:36 
rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:14.570 01:06:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:14.570 01:06:36 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:14.570 01:06:36 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:14.570 01:06:36 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:14.570 01:06:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:14.570 01:06:36 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:14.570 01:06:36 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:14.570 01:06:36 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:14.570 01:06:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:14.570 01:06:36 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:14.570 01:06:36 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:14.570 01:06:36 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:14.570 01:06:37 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:14.570 00:06:14.570 real 0m0.251s 00:06:14.570 user 0m0.156s 00:06:14.570 sys 0m0.029s 00:06:14.570 01:06:37 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.570 01:06:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:14.570 ************************************ 00:06:14.570 END TEST rpc_integrity 00:06:14.570 ************************************ 00:06:14.570 01:06:37 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:14.570 01:06:37 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:14.570 01:06:37 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:14.570 01:06:37 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.570 01:06:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.831 
************************************ 00:06:14.831 START TEST rpc_plugins 00:06:14.831 ************************************ 00:06:14.831 01:06:37 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:06:14.831 01:06:37 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:14.831 01:06:37 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:14.831 01:06:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:14.831 01:06:37 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:14.831 01:06:37 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:14.831 01:06:37 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:14.831 01:06:37 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:14.831 01:06:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:14.831 01:06:37 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:14.831 01:06:37 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:14.831 { 00:06:14.831 "name": "Malloc1", 00:06:14.831 "aliases": [ 00:06:14.831 "f42740e7-1302-4616-af52-a91cf1d9a8b6" 00:06:14.831 ], 00:06:14.831 "product_name": "Malloc disk", 00:06:14.831 "block_size": 4096, 00:06:14.831 "num_blocks": 256, 00:06:14.831 "uuid": "f42740e7-1302-4616-af52-a91cf1d9a8b6", 00:06:14.831 "assigned_rate_limits": { 00:06:14.831 "rw_ios_per_sec": 0, 00:06:14.831 "rw_mbytes_per_sec": 0, 00:06:14.831 "r_mbytes_per_sec": 0, 00:06:14.831 "w_mbytes_per_sec": 0 00:06:14.831 }, 00:06:14.831 "claimed": false, 00:06:14.831 "zoned": false, 00:06:14.831 "supported_io_types": { 00:06:14.831 "read": true, 00:06:14.831 "write": true, 00:06:14.831 "unmap": true, 00:06:14.831 "flush": true, 00:06:14.831 "reset": true, 00:06:14.831 "nvme_admin": false, 00:06:14.831 "nvme_io": false, 00:06:14.831 "nvme_io_md": false, 00:06:14.831 "write_zeroes": true, 00:06:14.831 "zcopy": true, 00:06:14.831 
"get_zone_info": false, 00:06:14.831 "zone_management": false, 00:06:14.831 "zone_append": false, 00:06:14.831 "compare": false, 00:06:14.831 "compare_and_write": false, 00:06:14.831 "abort": true, 00:06:14.831 "seek_hole": false, 00:06:14.831 "seek_data": false, 00:06:14.831 "copy": true, 00:06:14.831 "nvme_iov_md": false 00:06:14.831 }, 00:06:14.831 "memory_domains": [ 00:06:14.831 { 00:06:14.831 "dma_device_id": "system", 00:06:14.831 "dma_device_type": 1 00:06:14.831 }, 00:06:14.831 { 00:06:14.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:14.831 "dma_device_type": 2 00:06:14.831 } 00:06:14.831 ], 00:06:14.831 "driver_specific": {} 00:06:14.831 } 00:06:14.831 ]' 00:06:14.831 01:06:37 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:14.831 01:06:37 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:14.831 01:06:37 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:14.831 01:06:37 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:14.831 01:06:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:14.831 01:06:37 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:14.831 01:06:37 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:14.831 01:06:37 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:14.831 01:06:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:14.831 01:06:37 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:14.831 01:06:37 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:14.831 01:06:37 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:14.831 01:06:37 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:14.831 00:06:14.831 real 0m0.127s 00:06:14.831 user 0m0.078s 00:06:14.831 sys 0m0.012s 00:06:14.831 01:06:37 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.831 01:06:37 rpc.rpc_plugins -- 
common/autotest_common.sh@10 -- # set +x 00:06:14.831 ************************************ 00:06:14.831 END TEST rpc_plugins 00:06:14.831 ************************************ 00:06:14.831 01:06:37 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:14.831 01:06:37 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:14.831 01:06:37 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:14.831 01:06:37 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.831 01:06:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.831 ************************************ 00:06:14.831 START TEST rpc_trace_cmd_test 00:06:14.831 ************************************ 00:06:14.831 01:06:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:06:14.831 01:06:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:14.831 01:06:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:14.831 01:06:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:14.831 01:06:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:14.831 01:06:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:14.831 01:06:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:14.831 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid716814", 00:06:14.831 "tpoint_group_mask": "0x8", 00:06:14.831 "iscsi_conn": { 00:06:14.831 "mask": "0x2", 00:06:14.831 "tpoint_mask": "0x0" 00:06:14.831 }, 00:06:14.831 "scsi": { 00:06:14.831 "mask": "0x4", 00:06:14.831 "tpoint_mask": "0x0" 00:06:14.831 }, 00:06:14.831 "bdev": { 00:06:14.831 "mask": "0x8", 00:06:14.831 "tpoint_mask": "0xffffffffffffffff" 00:06:14.831 }, 00:06:14.831 "nvmf_rdma": { 00:06:14.831 "mask": "0x10", 00:06:14.831 "tpoint_mask": "0x0" 00:06:14.831 }, 00:06:14.831 "nvmf_tcp": { 00:06:14.831 "mask": "0x20", 00:06:14.831 "tpoint_mask": "0x0" 00:06:14.831 }, 
00:06:14.831 "ftl": { 00:06:14.831 "mask": "0x40", 00:06:14.831 "tpoint_mask": "0x0" 00:06:14.831 }, 00:06:14.831 "blobfs": { 00:06:14.831 "mask": "0x80", 00:06:14.831 "tpoint_mask": "0x0" 00:06:14.831 }, 00:06:14.831 "dsa": { 00:06:14.831 "mask": "0x200", 00:06:14.831 "tpoint_mask": "0x0" 00:06:14.831 }, 00:06:14.831 "thread": { 00:06:14.831 "mask": "0x400", 00:06:14.831 "tpoint_mask": "0x0" 00:06:14.831 }, 00:06:14.831 "nvme_pcie": { 00:06:14.831 "mask": "0x800", 00:06:14.831 "tpoint_mask": "0x0" 00:06:14.831 }, 00:06:14.831 "iaa": { 00:06:14.831 "mask": "0x1000", 00:06:14.831 "tpoint_mask": "0x0" 00:06:14.831 }, 00:06:14.831 "nvme_tcp": { 00:06:14.831 "mask": "0x2000", 00:06:14.831 "tpoint_mask": "0x0" 00:06:14.831 }, 00:06:14.831 "bdev_nvme": { 00:06:14.831 "mask": "0x4000", 00:06:14.831 "tpoint_mask": "0x0" 00:06:14.831 }, 00:06:14.831 "sock": { 00:06:14.831 "mask": "0x8000", 00:06:14.831 "tpoint_mask": "0x0" 00:06:14.831 } 00:06:14.831 }' 00:06:14.831 01:06:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:15.092 01:06:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:06:15.092 01:06:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:15.092 01:06:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:15.092 01:06:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:15.092 01:06:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:15.092 01:06:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:15.092 01:06:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:15.092 01:06:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:15.092 01:06:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:15.092 00:06:15.092 real 0m0.216s 00:06:15.092 user 0m0.186s 00:06:15.092 sys 0m0.023s 00:06:15.092 01:06:37 rpc.rpc_trace_cmd_test -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:06:15.092 01:06:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:15.092 ************************************ 00:06:15.092 END TEST rpc_trace_cmd_test 00:06:15.092 ************************************ 00:06:15.092 01:06:37 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:15.092 01:06:37 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:15.092 01:06:37 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:15.092 01:06:37 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:15.092 01:06:37 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:15.092 01:06:37 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.092 01:06:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.092 ************************************ 00:06:15.092 START TEST rpc_daemon_integrity 00:06:15.092 ************************************ 00:06:15.092 01:06:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:06:15.092 01:06:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:15.092 01:06:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.092 01:06:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:15.092 01:06:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.092 01:06:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:15.092 01:06:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:15.352 01:06:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:15.352 01:06:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:15.352 01:06:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.352 01:06:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:15.352 01:06:37 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.352 01:06:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:15.352 01:06:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:15.352 01:06:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.352 01:06:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:15.352 01:06:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.352 01:06:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:15.352 { 00:06:15.352 "name": "Malloc2", 00:06:15.352 "aliases": [ 00:06:15.352 "01c2e931-8239-4b53-ada9-845c328e328a" 00:06:15.352 ], 00:06:15.352 "product_name": "Malloc disk", 00:06:15.352 "block_size": 512, 00:06:15.352 "num_blocks": 16384, 00:06:15.352 "uuid": "01c2e931-8239-4b53-ada9-845c328e328a", 00:06:15.352 "assigned_rate_limits": { 00:06:15.352 "rw_ios_per_sec": 0, 00:06:15.352 "rw_mbytes_per_sec": 0, 00:06:15.352 "r_mbytes_per_sec": 0, 00:06:15.352 "w_mbytes_per_sec": 0 00:06:15.352 }, 00:06:15.352 "claimed": false, 00:06:15.352 "zoned": false, 00:06:15.352 "supported_io_types": { 00:06:15.352 "read": true, 00:06:15.352 "write": true, 00:06:15.352 "unmap": true, 00:06:15.352 "flush": true, 00:06:15.352 "reset": true, 00:06:15.352 "nvme_admin": false, 00:06:15.352 "nvme_io": false, 00:06:15.352 "nvme_io_md": false, 00:06:15.352 "write_zeroes": true, 00:06:15.352 "zcopy": true, 00:06:15.352 "get_zone_info": false, 00:06:15.352 "zone_management": false, 00:06:15.352 "zone_append": false, 00:06:15.353 "compare": false, 00:06:15.353 "compare_and_write": false, 00:06:15.353 "abort": true, 00:06:15.353 "seek_hole": false, 00:06:15.353 "seek_data": false, 00:06:15.353 "copy": true, 00:06:15.353 "nvme_iov_md": false 00:06:15.353 }, 00:06:15.353 "memory_domains": [ 00:06:15.353 { 00:06:15.353 "dma_device_id": "system", 00:06:15.353 "dma_device_type": 
1 00:06:15.353 }, 00:06:15.353 { 00:06:15.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:15.353 "dma_device_type": 2 00:06:15.353 } 00:06:15.353 ], 00:06:15.353 "driver_specific": {} 00:06:15.353 } 00:06:15.353 ]' 00:06:15.353 01:06:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:15.353 01:06:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:15.353 01:06:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:15.353 01:06:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.353 01:06:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:15.353 [2024-07-25 01:06:37.689796] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:15.353 [2024-07-25 01:06:37.689822] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:15.353 [2024-07-25 01:06:37.689834] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2459ac0 00:06:15.353 [2024-07-25 01:06:37.689840] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:15.353 [2024-07-25 01:06:37.690785] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:15.353 [2024-07-25 01:06:37.690805] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:15.353 Passthru0 00:06:15.353 01:06:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.353 01:06:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:15.353 01:06:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.353 01:06:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:15.353 01:06:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.353 01:06:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 
00:06:15.353 { 00:06:15.353 "name": "Malloc2", 00:06:15.353 "aliases": [ 00:06:15.353 "01c2e931-8239-4b53-ada9-845c328e328a" 00:06:15.353 ], 00:06:15.353 "product_name": "Malloc disk", 00:06:15.353 "block_size": 512, 00:06:15.353 "num_blocks": 16384, 00:06:15.353 "uuid": "01c2e931-8239-4b53-ada9-845c328e328a", 00:06:15.353 "assigned_rate_limits": { 00:06:15.353 "rw_ios_per_sec": 0, 00:06:15.353 "rw_mbytes_per_sec": 0, 00:06:15.353 "r_mbytes_per_sec": 0, 00:06:15.353 "w_mbytes_per_sec": 0 00:06:15.353 }, 00:06:15.353 "claimed": true, 00:06:15.353 "claim_type": "exclusive_write", 00:06:15.353 "zoned": false, 00:06:15.353 "supported_io_types": { 00:06:15.353 "read": true, 00:06:15.353 "write": true, 00:06:15.353 "unmap": true, 00:06:15.353 "flush": true, 00:06:15.353 "reset": true, 00:06:15.353 "nvme_admin": false, 00:06:15.353 "nvme_io": false, 00:06:15.353 "nvme_io_md": false, 00:06:15.353 "write_zeroes": true, 00:06:15.353 "zcopy": true, 00:06:15.353 "get_zone_info": false, 00:06:15.353 "zone_management": false, 00:06:15.353 "zone_append": false, 00:06:15.353 "compare": false, 00:06:15.353 "compare_and_write": false, 00:06:15.353 "abort": true, 00:06:15.353 "seek_hole": false, 00:06:15.353 "seek_data": false, 00:06:15.353 "copy": true, 00:06:15.353 "nvme_iov_md": false 00:06:15.353 }, 00:06:15.353 "memory_domains": [ 00:06:15.353 { 00:06:15.353 "dma_device_id": "system", 00:06:15.353 "dma_device_type": 1 00:06:15.353 }, 00:06:15.353 { 00:06:15.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:15.353 "dma_device_type": 2 00:06:15.353 } 00:06:15.353 ], 00:06:15.353 "driver_specific": {} 00:06:15.353 }, 00:06:15.353 { 00:06:15.353 "name": "Passthru0", 00:06:15.353 "aliases": [ 00:06:15.353 "6f46d7cc-4d6c-54b3-9b7b-67698c8a35f2" 00:06:15.353 ], 00:06:15.353 "product_name": "passthru", 00:06:15.353 "block_size": 512, 00:06:15.353 "num_blocks": 16384, 00:06:15.353 "uuid": "6f46d7cc-4d6c-54b3-9b7b-67698c8a35f2", 00:06:15.353 "assigned_rate_limits": { 00:06:15.353 
"rw_ios_per_sec": 0, 00:06:15.353 "rw_mbytes_per_sec": 0, 00:06:15.353 "r_mbytes_per_sec": 0, 00:06:15.353 "w_mbytes_per_sec": 0 00:06:15.353 }, 00:06:15.353 "claimed": false, 00:06:15.353 "zoned": false, 00:06:15.353 "supported_io_types": { 00:06:15.353 "read": true, 00:06:15.353 "write": true, 00:06:15.353 "unmap": true, 00:06:15.353 "flush": true, 00:06:15.353 "reset": true, 00:06:15.353 "nvme_admin": false, 00:06:15.353 "nvme_io": false, 00:06:15.353 "nvme_io_md": false, 00:06:15.353 "write_zeroes": true, 00:06:15.353 "zcopy": true, 00:06:15.353 "get_zone_info": false, 00:06:15.353 "zone_management": false, 00:06:15.353 "zone_append": false, 00:06:15.353 "compare": false, 00:06:15.353 "compare_and_write": false, 00:06:15.353 "abort": true, 00:06:15.353 "seek_hole": false, 00:06:15.353 "seek_data": false, 00:06:15.353 "copy": true, 00:06:15.353 "nvme_iov_md": false 00:06:15.353 }, 00:06:15.353 "memory_domains": [ 00:06:15.353 { 00:06:15.353 "dma_device_id": "system", 00:06:15.353 "dma_device_type": 1 00:06:15.353 }, 00:06:15.353 { 00:06:15.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:15.353 "dma_device_type": 2 00:06:15.353 } 00:06:15.353 ], 00:06:15.353 "driver_specific": { 00:06:15.353 "passthru": { 00:06:15.353 "name": "Passthru0", 00:06:15.353 "base_bdev_name": "Malloc2" 00:06:15.353 } 00:06:15.353 } 00:06:15.353 } 00:06:15.353 ]' 00:06:15.353 01:06:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:15.353 01:06:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:15.353 01:06:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:15.353 01:06:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.353 01:06:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:15.353 01:06:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.353 01:06:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # 
rpc_cmd bdev_malloc_delete Malloc2 00:06:15.353 01:06:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.353 01:06:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:15.353 01:06:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.353 01:06:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:15.353 01:06:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.353 01:06:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:15.353 01:06:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.353 01:06:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:15.353 01:06:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:15.353 01:06:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:15.353 00:06:15.353 real 0m0.245s 00:06:15.353 user 0m0.156s 00:06:15.353 sys 0m0.031s 00:06:15.353 01:06:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:15.353 01:06:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:15.353 ************************************ 00:06:15.353 END TEST rpc_daemon_integrity 00:06:15.353 ************************************ 00:06:15.353 01:06:37 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:15.353 01:06:37 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:15.353 01:06:37 rpc -- rpc/rpc.sh@84 -- # killprocess 716814 00:06:15.353 01:06:37 rpc -- common/autotest_common.sh@948 -- # '[' -z 716814 ']' 00:06:15.353 01:06:37 rpc -- common/autotest_common.sh@952 -- # kill -0 716814 00:06:15.353 01:06:37 rpc -- common/autotest_common.sh@953 -- # uname 00:06:15.353 01:06:37 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:15.353 01:06:37 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 
716814 00:06:15.614 01:06:37 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:15.614 01:06:37 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:15.614 01:06:37 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 716814' 00:06:15.614 killing process with pid 716814 00:06:15.614 01:06:37 rpc -- common/autotest_common.sh@967 -- # kill 716814 00:06:15.614 01:06:37 rpc -- common/autotest_common.sh@972 -- # wait 716814 00:06:15.873 00:06:15.873 real 0m2.352s 00:06:15.873 user 0m3.056s 00:06:15.873 sys 0m0.600s 00:06:15.873 01:06:38 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:15.873 01:06:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.873 ************************************ 00:06:15.873 END TEST rpc 00:06:15.873 ************************************ 00:06:15.873 01:06:38 -- common/autotest_common.sh@1142 -- # return 0 00:06:15.873 01:06:38 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:15.873 01:06:38 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:15.873 01:06:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.873 01:06:38 -- common/autotest_common.sh@10 -- # set +x 00:06:15.873 ************************************ 00:06:15.873 START TEST skip_rpc 00:06:15.873 ************************************ 00:06:15.873 01:06:38 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:15.873 * Looking for test storage... 
00:06:15.873 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:15.873 01:06:38 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:15.873 01:06:38 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:15.873 01:06:38 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:15.874 01:06:38 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:15.874 01:06:38 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.874 01:06:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.133 ************************************ 00:06:16.133 START TEST skip_rpc 00:06:16.133 ************************************ 00:06:16.133 01:06:38 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:06:16.133 01:06:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=717443 00:06:16.133 01:06:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:16.133 01:06:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:16.133 01:06:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:16.133 [2024-07-25 01:06:38.432755] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:06:16.133 [2024-07-25 01:06:38.432793] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid717443 ] 00:06:16.133 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.133 [2024-07-25 01:06:38.485220] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.133 [2024-07-25 01:06:38.557876] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.451 01:06:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:21.451 01:06:43 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:21.451 01:06:43 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:21.451 01:06:43 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:21.451 01:06:43 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:21.451 01:06:43 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:21.451 01:06:43 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:21.451 01:06:43 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:06:21.451 01:06:43 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.451 01:06:43 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.451 01:06:43 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:21.451 01:06:43 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:21.451 01:06:43 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:21.451 01:06:43 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:21.451 01:06:43 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es 
== 0 )) 00:06:21.451 01:06:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:21.451 01:06:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 717443 00:06:21.451 01:06:43 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 717443 ']' 00:06:21.451 01:06:43 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 717443 00:06:21.451 01:06:43 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:06:21.451 01:06:43 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:21.451 01:06:43 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 717443 00:06:21.451 01:06:43 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:21.451 01:06:43 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:21.452 01:06:43 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 717443' 00:06:21.452 killing process with pid 717443 00:06:21.452 01:06:43 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 717443 00:06:21.452 01:06:43 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 717443 00:06:21.452 00:06:21.452 real 0m5.367s 00:06:21.452 user 0m5.137s 00:06:21.452 sys 0m0.256s 00:06:21.452 01:06:43 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.452 01:06:43 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.452 ************************************ 00:06:21.452 END TEST skip_rpc 00:06:21.452 ************************************ 00:06:21.452 01:06:43 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:21.452 01:06:43 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:21.452 01:06:43 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:21.452 01:06:43 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.452 
01:06:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.452 ************************************ 00:06:21.452 START TEST skip_rpc_with_json 00:06:21.452 ************************************ 00:06:21.452 01:06:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:06:21.452 01:06:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:21.452 01:06:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:21.452 01:06:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=718397 00:06:21.452 01:06:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:21.452 01:06:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 718397 00:06:21.452 01:06:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 718397 ']' 00:06:21.452 01:06:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.452 01:06:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:21.452 01:06:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.452 01:06:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:21.452 01:06:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:21.452 [2024-07-25 01:06:43.846961] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:06:21.452 [2024-07-25 01:06:43.846998] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid718397 ] 00:06:21.452 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.452 [2024-07-25 01:06:43.900071] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.712 [2024-07-25 01:06:43.980184] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.282 01:06:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:22.282 01:06:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:06:22.282 01:06:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:22.282 01:06:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.282 01:06:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:22.282 [2024-07-25 01:06:44.659519] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:22.282 request: 00:06:22.282 { 00:06:22.282 "trtype": "tcp", 00:06:22.282 "method": "nvmf_get_transports", 00:06:22.282 "req_id": 1 00:06:22.282 } 00:06:22.282 Got JSON-RPC error response 00:06:22.282 response: 00:06:22.282 { 00:06:22.282 "code": -19, 00:06:22.282 "message": "No such device" 00:06:22.282 } 00:06:22.282 01:06:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:22.282 01:06:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:22.282 01:06:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.282 01:06:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:22.282 [2024-07-25 01:06:44.671626] tcp.c: 
677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:22.282 01:06:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.282 01:06:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:22.282 01:06:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.282 01:06:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:22.542 01:06:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.542 01:06:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:22.542 { 00:06:22.542 "subsystems": [ 00:06:22.542 { 00:06:22.542 "subsystem": "vfio_user_target", 00:06:22.542 "config": null 00:06:22.542 }, 00:06:22.542 { 00:06:22.542 "subsystem": "keyring", 00:06:22.542 "config": [] 00:06:22.542 }, 00:06:22.542 { 00:06:22.542 "subsystem": "iobuf", 00:06:22.542 "config": [ 00:06:22.542 { 00:06:22.542 "method": "iobuf_set_options", 00:06:22.542 "params": { 00:06:22.542 "small_pool_count": 8192, 00:06:22.542 "large_pool_count": 1024, 00:06:22.542 "small_bufsize": 8192, 00:06:22.542 "large_bufsize": 135168 00:06:22.542 } 00:06:22.542 } 00:06:22.542 ] 00:06:22.542 }, 00:06:22.542 { 00:06:22.542 "subsystem": "sock", 00:06:22.542 "config": [ 00:06:22.542 { 00:06:22.542 "method": "sock_set_default_impl", 00:06:22.542 "params": { 00:06:22.542 "impl_name": "posix" 00:06:22.542 } 00:06:22.542 }, 00:06:22.542 { 00:06:22.542 "method": "sock_impl_set_options", 00:06:22.542 "params": { 00:06:22.542 "impl_name": "ssl", 00:06:22.542 "recv_buf_size": 4096, 00:06:22.542 "send_buf_size": 4096, 00:06:22.542 "enable_recv_pipe": true, 00:06:22.542 "enable_quickack": false, 00:06:22.542 "enable_placement_id": 0, 00:06:22.542 "enable_zerocopy_send_server": true, 00:06:22.542 "enable_zerocopy_send_client": false, 00:06:22.542 "zerocopy_threshold": 0, 
00:06:22.542 "tls_version": 0, 00:06:22.542 "enable_ktls": false 00:06:22.542 } 00:06:22.542 }, 00:06:22.542 { 00:06:22.542 "method": "sock_impl_set_options", 00:06:22.542 "params": { 00:06:22.542 "impl_name": "posix", 00:06:22.542 "recv_buf_size": 2097152, 00:06:22.542 "send_buf_size": 2097152, 00:06:22.542 "enable_recv_pipe": true, 00:06:22.542 "enable_quickack": false, 00:06:22.542 "enable_placement_id": 0, 00:06:22.542 "enable_zerocopy_send_server": true, 00:06:22.542 "enable_zerocopy_send_client": false, 00:06:22.542 "zerocopy_threshold": 0, 00:06:22.542 "tls_version": 0, 00:06:22.542 "enable_ktls": false 00:06:22.542 } 00:06:22.542 } 00:06:22.542 ] 00:06:22.542 }, 00:06:22.542 { 00:06:22.542 "subsystem": "vmd", 00:06:22.542 "config": [] 00:06:22.542 }, 00:06:22.542 { 00:06:22.542 "subsystem": "accel", 00:06:22.542 "config": [ 00:06:22.542 { 00:06:22.542 "method": "accel_set_options", 00:06:22.542 "params": { 00:06:22.542 "small_cache_size": 128, 00:06:22.542 "large_cache_size": 16, 00:06:22.542 "task_count": 2048, 00:06:22.542 "sequence_count": 2048, 00:06:22.542 "buf_count": 2048 00:06:22.542 } 00:06:22.542 } 00:06:22.542 ] 00:06:22.542 }, 00:06:22.542 { 00:06:22.542 "subsystem": "bdev", 00:06:22.542 "config": [ 00:06:22.542 { 00:06:22.542 "method": "bdev_set_options", 00:06:22.542 "params": { 00:06:22.542 "bdev_io_pool_size": 65535, 00:06:22.542 "bdev_io_cache_size": 256, 00:06:22.542 "bdev_auto_examine": true, 00:06:22.542 "iobuf_small_cache_size": 128, 00:06:22.542 "iobuf_large_cache_size": 16 00:06:22.542 } 00:06:22.542 }, 00:06:22.542 { 00:06:22.542 "method": "bdev_raid_set_options", 00:06:22.542 "params": { 00:06:22.542 "process_window_size_kb": 1024 00:06:22.542 } 00:06:22.542 }, 00:06:22.542 { 00:06:22.542 "method": "bdev_iscsi_set_options", 00:06:22.542 "params": { 00:06:22.542 "timeout_sec": 30 00:06:22.542 } 00:06:22.542 }, 00:06:22.542 { 00:06:22.542 "method": "bdev_nvme_set_options", 00:06:22.542 "params": { 00:06:22.542 "action_on_timeout": 
"none", 00:06:22.542 "timeout_us": 0, 00:06:22.542 "timeout_admin_us": 0, 00:06:22.542 "keep_alive_timeout_ms": 10000, 00:06:22.542 "arbitration_burst": 0, 00:06:22.542 "low_priority_weight": 0, 00:06:22.542 "medium_priority_weight": 0, 00:06:22.542 "high_priority_weight": 0, 00:06:22.542 "nvme_adminq_poll_period_us": 10000, 00:06:22.542 "nvme_ioq_poll_period_us": 0, 00:06:22.542 "io_queue_requests": 0, 00:06:22.542 "delay_cmd_submit": true, 00:06:22.542 "transport_retry_count": 4, 00:06:22.542 "bdev_retry_count": 3, 00:06:22.542 "transport_ack_timeout": 0, 00:06:22.542 "ctrlr_loss_timeout_sec": 0, 00:06:22.542 "reconnect_delay_sec": 0, 00:06:22.542 "fast_io_fail_timeout_sec": 0, 00:06:22.542 "disable_auto_failback": false, 00:06:22.542 "generate_uuids": false, 00:06:22.542 "transport_tos": 0, 00:06:22.542 "nvme_error_stat": false, 00:06:22.542 "rdma_srq_size": 0, 00:06:22.542 "io_path_stat": false, 00:06:22.542 "allow_accel_sequence": false, 00:06:22.542 "rdma_max_cq_size": 0, 00:06:22.542 "rdma_cm_event_timeout_ms": 0, 00:06:22.542 "dhchap_digests": [ 00:06:22.542 "sha256", 00:06:22.542 "sha384", 00:06:22.542 "sha512" 00:06:22.542 ], 00:06:22.542 "dhchap_dhgroups": [ 00:06:22.542 "null", 00:06:22.542 "ffdhe2048", 00:06:22.542 "ffdhe3072", 00:06:22.542 "ffdhe4096", 00:06:22.542 "ffdhe6144", 00:06:22.542 "ffdhe8192" 00:06:22.542 ] 00:06:22.542 } 00:06:22.542 }, 00:06:22.542 { 00:06:22.542 "method": "bdev_nvme_set_hotplug", 00:06:22.543 "params": { 00:06:22.543 "period_us": 100000, 00:06:22.543 "enable": false 00:06:22.543 } 00:06:22.543 }, 00:06:22.543 { 00:06:22.543 "method": "bdev_wait_for_examine" 00:06:22.543 } 00:06:22.543 ] 00:06:22.543 }, 00:06:22.543 { 00:06:22.543 "subsystem": "scsi", 00:06:22.543 "config": null 00:06:22.543 }, 00:06:22.543 { 00:06:22.543 "subsystem": "scheduler", 00:06:22.543 "config": [ 00:06:22.543 { 00:06:22.543 "method": "framework_set_scheduler", 00:06:22.543 "params": { 00:06:22.543 "name": "static" 00:06:22.543 } 00:06:22.543 } 
00:06:22.543 ] 00:06:22.543 }, 00:06:22.543 { 00:06:22.543 "subsystem": "vhost_scsi", 00:06:22.543 "config": [] 00:06:22.543 }, 00:06:22.543 { 00:06:22.543 "subsystem": "vhost_blk", 00:06:22.543 "config": [] 00:06:22.543 }, 00:06:22.543 { 00:06:22.543 "subsystem": "ublk", 00:06:22.543 "config": [] 00:06:22.543 }, 00:06:22.543 { 00:06:22.543 "subsystem": "nbd", 00:06:22.543 "config": [] 00:06:22.543 }, 00:06:22.543 { 00:06:22.543 "subsystem": "nvmf", 00:06:22.543 "config": [ 00:06:22.543 { 00:06:22.543 "method": "nvmf_set_config", 00:06:22.543 "params": { 00:06:22.543 "discovery_filter": "match_any", 00:06:22.543 "admin_cmd_passthru": { 00:06:22.543 "identify_ctrlr": false 00:06:22.543 } 00:06:22.543 } 00:06:22.543 }, 00:06:22.543 { 00:06:22.543 "method": "nvmf_set_max_subsystems", 00:06:22.543 "params": { 00:06:22.543 "max_subsystems": 1024 00:06:22.543 } 00:06:22.543 }, 00:06:22.543 { 00:06:22.543 "method": "nvmf_set_crdt", 00:06:22.543 "params": { 00:06:22.543 "crdt1": 0, 00:06:22.543 "crdt2": 0, 00:06:22.543 "crdt3": 0 00:06:22.543 } 00:06:22.543 }, 00:06:22.543 { 00:06:22.543 "method": "nvmf_create_transport", 00:06:22.543 "params": { 00:06:22.543 "trtype": "TCP", 00:06:22.543 "max_queue_depth": 128, 00:06:22.543 "max_io_qpairs_per_ctrlr": 127, 00:06:22.543 "in_capsule_data_size": 4096, 00:06:22.543 "max_io_size": 131072, 00:06:22.543 "io_unit_size": 131072, 00:06:22.543 "max_aq_depth": 128, 00:06:22.543 "num_shared_buffers": 511, 00:06:22.543 "buf_cache_size": 4294967295, 00:06:22.543 "dif_insert_or_strip": false, 00:06:22.543 "zcopy": false, 00:06:22.543 "c2h_success": true, 00:06:22.543 "sock_priority": 0, 00:06:22.543 "abort_timeout_sec": 1, 00:06:22.543 "ack_timeout": 0, 00:06:22.543 "data_wr_pool_size": 0 00:06:22.543 } 00:06:22.543 } 00:06:22.543 ] 00:06:22.543 }, 00:06:22.543 { 00:06:22.543 "subsystem": "iscsi", 00:06:22.543 "config": [ 00:06:22.543 { 00:06:22.543 "method": "iscsi_set_options", 00:06:22.543 "params": { 00:06:22.543 "node_base": 
"iqn.2016-06.io.spdk", 00:06:22.543 "max_sessions": 128, 00:06:22.543 "max_connections_per_session": 2, 00:06:22.543 "max_queue_depth": 64, 00:06:22.543 "default_time2wait": 2, 00:06:22.543 "default_time2retain": 20, 00:06:22.543 "first_burst_length": 8192, 00:06:22.543 "immediate_data": true, 00:06:22.543 "allow_duplicated_isid": false, 00:06:22.543 "error_recovery_level": 0, 00:06:22.543 "nop_timeout": 60, 00:06:22.543 "nop_in_interval": 30, 00:06:22.543 "disable_chap": false, 00:06:22.543 "require_chap": false, 00:06:22.543 "mutual_chap": false, 00:06:22.543 "chap_group": 0, 00:06:22.543 "max_large_datain_per_connection": 64, 00:06:22.543 "max_r2t_per_connection": 4, 00:06:22.543 "pdu_pool_size": 36864, 00:06:22.543 "immediate_data_pool_size": 16384, 00:06:22.543 "data_out_pool_size": 2048 00:06:22.543 } 00:06:22.543 } 00:06:22.543 ] 00:06:22.543 } 00:06:22.543 ] 00:06:22.543 } 00:06:22.543 01:06:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:22.543 01:06:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 718397 00:06:22.543 01:06:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 718397 ']' 00:06:22.543 01:06:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 718397 00:06:22.543 01:06:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:06:22.543 01:06:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:22.543 01:06:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 718397 00:06:22.543 01:06:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:22.543 01:06:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:22.543 01:06:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 718397' 00:06:22.543 killing 
process with pid 718397 00:06:22.543 01:06:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 718397 00:06:22.543 01:06:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 718397 00:06:22.803 01:06:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=718636 00:06:22.803 01:06:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:22.803 01:06:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:28.080 01:06:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 718636 00:06:28.080 01:06:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 718636 ']' 00:06:28.080 01:06:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 718636 00:06:28.080 01:06:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:06:28.080 01:06:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:28.080 01:06:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 718636 00:06:28.080 01:06:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:28.080 01:06:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:28.080 01:06:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 718636' 00:06:28.080 killing process with pid 718636 00:06:28.080 01:06:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 718636 00:06:28.080 01:06:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 718636 00:06:28.080 01:06:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:28.080 01:06:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:28.080 00:06:28.080 real 0m6.733s 00:06:28.080 user 0m6.576s 00:06:28.080 sys 0m0.567s 00:06:28.080 01:06:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:28.080 01:06:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:28.080 ************************************ 00:06:28.080 END TEST skip_rpc_with_json 00:06:28.080 ************************************ 00:06:28.080 01:06:50 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:28.341 01:06:50 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:28.341 01:06:50 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:28.341 01:06:50 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.341 01:06:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.341 ************************************ 00:06:28.341 START TEST skip_rpc_with_delay 00:06:28.341 ************************************ 00:06:28.341 01:06:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:06:28.341 01:06:50 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:28.341 01:06:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:06:28.341 01:06:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:28.341 01:06:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:28.341 01:06:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:28.341 01:06:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:28.341 01:06:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:28.341 01:06:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:28.341 01:06:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:28.341 01:06:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:28.341 01:06:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:28.341 01:06:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:28.341 [2024-07-25 01:06:50.659602] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:06:28.341 [2024-07-25 01:06:50.659658] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:28.341 01:06:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:06:28.341 01:06:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:28.341 01:06:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:28.341 01:06:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:28.341 00:06:28.341 real 0m0.064s 00:06:28.341 user 0m0.046s 00:06:28.341 sys 0m0.017s 00:06:28.341 01:06:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:28.341 01:06:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:28.341 ************************************ 00:06:28.341 END TEST skip_rpc_with_delay 00:06:28.341 ************************************ 00:06:28.341 01:06:50 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:28.341 01:06:50 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:28.341 01:06:50 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:28.341 01:06:50 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:28.341 01:06:50 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:28.341 01:06:50 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.341 01:06:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.341 ************************************ 00:06:28.341 START TEST exit_on_failed_rpc_init 00:06:28.341 ************************************ 00:06:28.341 01:06:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:06:28.341 01:06:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=719613 00:06:28.341 01:06:50 skip_rpc.exit_on_failed_rpc_init -- 
rpc/skip_rpc.sh@63 -- # waitforlisten 719613 00:06:28.341 01:06:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:28.341 01:06:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 719613 ']' 00:06:28.341 01:06:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.341 01:06:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:28.341 01:06:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.341 01:06:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:28.341 01:06:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:28.341 [2024-07-25 01:06:50.780062] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:06:28.341 [2024-07-25 01:06:50.780102] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid719613 ] 00:06:28.341 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.341 [2024-07-25 01:06:50.832327] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.602 [2024-07-25 01:06:50.912322] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.172 01:06:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:29.172 01:06:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:06:29.172 01:06:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:29.172 01:06:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:29.172 01:06:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:06:29.172 01:06:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:29.172 01:06:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:29.172 01:06:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:29.172 01:06:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:29.172 01:06:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:29.172 01:06:51 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:29.172 01:06:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:29.172 01:06:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:29.172 01:06:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:29.172 01:06:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:29.172 [2024-07-25 01:06:51.612573] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:06:29.172 [2024-07-25 01:06:51.612617] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid719830 ] 00:06:29.172 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.172 [2024-07-25 01:06:51.663934] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.432 [2024-07-25 01:06:51.737426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.432 [2024-07-25 01:06:51.737488] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:06:29.432 [2024-07-25 01:06:51.737498] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:29.432 [2024-07-25 01:06:51.737503] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:29.432 01:06:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:06:29.432 01:06:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:29.432 01:06:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:06:29.432 01:06:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:06:29.432 01:06:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:06:29.432 01:06:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:29.432 01:06:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:29.432 01:06:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 719613 00:06:29.432 01:06:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 719613 ']' 00:06:29.432 01:06:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 719613 00:06:29.432 01:06:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:06:29.432 01:06:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:29.432 01:06:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 719613 00:06:29.432 01:06:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:29.432 01:06:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:29.432 01:06:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 719613' 
00:06:29.432 killing process with pid 719613 00:06:29.432 01:06:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 719613 00:06:29.432 01:06:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 719613 00:06:29.692 00:06:29.692 real 0m1.433s 00:06:29.692 user 0m1.670s 00:06:29.692 sys 0m0.367s 00:06:29.692 01:06:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.692 01:06:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:29.692 ************************************ 00:06:29.692 END TEST exit_on_failed_rpc_init 00:06:29.692 ************************************ 00:06:29.952 01:06:52 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:29.952 01:06:52 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:29.952 00:06:29.952 real 0m13.941s 00:06:29.952 user 0m13.565s 00:06:29.952 sys 0m1.441s 00:06:29.952 01:06:52 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.952 01:06:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.952 ************************************ 00:06:29.952 END TEST skip_rpc 00:06:29.952 ************************************ 00:06:29.952 01:06:52 -- common/autotest_common.sh@1142 -- # return 0 00:06:29.952 01:06:52 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:29.952 01:06:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:29.952 01:06:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.952 01:06:52 -- common/autotest_common.sh@10 -- # set +x 00:06:29.952 ************************************ 00:06:29.952 START TEST rpc_client 00:06:29.952 ************************************ 00:06:29.952 01:06:52 rpc_client -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:29.952 * Looking for test storage... 00:06:29.952 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:06:29.952 01:06:52 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:29.952 OK 00:06:29.952 01:06:52 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:29.952 00:06:29.952 real 0m0.101s 00:06:29.952 user 0m0.046s 00:06:29.952 sys 0m0.063s 00:06:29.952 01:06:52 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.952 01:06:52 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:29.952 ************************************ 00:06:29.952 END TEST rpc_client 00:06:29.952 ************************************ 00:06:29.952 01:06:52 -- common/autotest_common.sh@1142 -- # return 0 00:06:29.952 01:06:52 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:29.952 01:06:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:29.952 01:06:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.952 01:06:52 -- common/autotest_common.sh@10 -- # set +x 00:06:29.952 ************************************ 00:06:29.952 START TEST json_config 00:06:29.952 ************************************ 00:06:29.952 01:06:52 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:30.212 01:06:52 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:30.212 01:06:52 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:30.212 01:06:52 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:30.212 01:06:52 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:30.212 
01:06:52 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:30.212 01:06:52 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:30.212 01:06:52 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:30.212 01:06:52 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:30.212 01:06:52 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:30.212 01:06:52 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:30.212 01:06:52 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:30.212 01:06:52 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:30.212 01:06:52 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:30.212 01:06:52 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:30.212 01:06:52 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:30.212 01:06:52 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:30.212 01:06:52 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:30.212 01:06:52 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:30.212 01:06:52 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:30.212 01:06:52 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:30.212 01:06:52 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:30.212 01:06:52 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:30.212 01:06:52 json_config -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.212 01:06:52 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.212 01:06:52 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.212 01:06:52 json_config -- paths/export.sh@5 -- # export PATH 00:06:30.212 01:06:52 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.212 01:06:52 json_config -- nvmf/common.sh@47 -- # : 0 00:06:30.212 01:06:52 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:30.212 
01:06:52 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:30.212 01:06:52 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:30.212 01:06:52 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:30.212 01:06:52 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:30.212 01:06:52 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:30.212 01:06:52 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:30.212 01:06:52 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:30.212 01:06:52 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:30.212 01:06:52 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:30.212 01:06:52 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:30.212 01:06:52 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:30.212 01:06:52 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:30.212 01:06:52 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:30.212 01:06:52 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:30.212 01:06:52 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:30.212 01:06:52 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:30.212 01:06:52 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:30.212 01:06:52 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:30.212 01:06:52 json_config -- json_config/json_config.sh@34 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:06:30.212 01:06:52 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:30.212 01:06:52 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:30.212 01:06:52 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:30.212 01:06:52 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:06:30.212 INFO: JSON configuration test init 00:06:30.212 01:06:52 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:06:30.212 01:06:52 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:06:30.212 01:06:52 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:30.212 01:06:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:30.212 01:06:52 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:06:30.212 01:06:52 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:30.212 01:06:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:30.212 01:06:52 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:06:30.212 01:06:52 json_config -- json_config/common.sh@9 -- # local app=target 00:06:30.212 01:06:52 json_config -- json_config/common.sh@10 -- # shift 00:06:30.212 01:06:52 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:30.212 01:06:52 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:30.212 01:06:52 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:30.212 01:06:52 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:30.212 01:06:52 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 
00:06:30.212 01:06:52 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=719962 00:06:30.212 01:06:52 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:30.212 Waiting for target to run... 00:06:30.212 01:06:52 json_config -- json_config/common.sh@25 -- # waitforlisten 719962 /var/tmp/spdk_tgt.sock 00:06:30.212 01:06:52 json_config -- common/autotest_common.sh@829 -- # '[' -z 719962 ']' 00:06:30.212 01:06:52 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:30.212 01:06:52 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:30.212 01:06:52 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:30.212 01:06:52 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:30.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:30.212 01:06:52 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:30.212 01:06:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:30.212 [2024-07-25 01:06:52.584034] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
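The `waitforlisten` step above polls (with `max_retries=100`) until the freshly launched target opens its UNIX domain socket at /var/tmp/spdk_tgt.sock. A minimal Python sketch of that retry pattern — the function name and timings here are illustrative, not SPDK's actual helper:

```python
import os
import socket
import time

def wait_for_unix_socket(path, max_retries=100, delay=0.1):
    """Poll until a UNIX domain socket at `path` accepts connections."""
    for _ in range(max_retries):
        if os.path.exists(path):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            try:
                s.connect(path)
                return True   # target is up and listening
            except OSError:
                pass          # socket file exists but nothing is accepting yet
            finally:
                s.close()
        time.sleep(delay)
    return False              # target never came up within the retry budget
```

The connect attempt (rather than a bare existence check) matters: the socket file can appear before the app's RPC server is actually accepting.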
00:06:30.212 [2024-07-25 01:06:52.584089] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid719962 ] 00:06:30.212 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.472 [2024-07-25 01:06:52.856066] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.472 [2024-07-25 01:06:52.923734] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.040 01:06:53 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:31.040 01:06:53 json_config -- common/autotest_common.sh@862 -- # return 0 00:06:31.040 01:06:53 json_config -- json_config/common.sh@26 -- # echo '' 00:06:31.040 00:06:31.040 01:06:53 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:06:31.040 01:06:53 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:06:31.040 01:06:53 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:31.040 01:06:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:31.041 01:06:53 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:06:31.041 01:06:53 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:06:31.041 01:06:53 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:31.041 01:06:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:31.041 01:06:53 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:31.041 01:06:53 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:06:31.041 01:06:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:34.336 
01:06:56 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:06:34.336 01:06:56 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:34.336 01:06:56 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:34.336 01:06:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:34.336 01:06:56 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:34.336 01:06:56 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:34.336 01:06:56 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:34.336 01:06:56 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:06:34.336 01:06:56 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:06:34.336 01:06:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:34.336 01:06:56 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:34.336 01:06:56 json_config -- json_config/json_config.sh@48 -- # local get_types 00:06:34.336 01:06:56 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:06:34.336 01:06:56 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:06:34.336 01:06:56 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:06:34.336 01:06:56 json_config -- json_config/json_config.sh@51 -- # sort 00:06:34.336 01:06:56 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:06:34.336 01:06:56 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:06:34.336 01:06:56 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:06:34.337 01:06:56 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:06:34.337 01:06:56 
json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:34.337 01:06:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:34.337 01:06:56 json_config -- json_config/json_config.sh@59 -- # return 0 00:06:34.337 01:06:56 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:06:34.337 01:06:56 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:06:34.337 01:06:56 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:06:34.337 01:06:56 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:06:34.337 01:06:56 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:06:34.337 01:06:56 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:06:34.337 01:06:56 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:34.337 01:06:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:34.337 01:06:56 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:34.337 01:06:56 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:06:34.337 01:06:56 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:06:34.337 01:06:56 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:34.337 01:06:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:34.597 MallocForNvmf0 00:06:34.597 01:06:56 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:34.597 01:06:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:34.597 MallocForNvmf1 00:06:34.597 01:06:57 
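The `tgt_check_notification_types` step above concatenates the expected and reported type lists and pipes them through `tr ' ' '\n' | sort | uniq -u`, so any type present in only one of the two lists survives as `type_diff` (empty here, hence `return 0`). The same symmetric-difference check, sketched in Python (the function name is made up for illustration):

```python
from collections import Counter

def type_diff(enabled, reported):
    """Types appearing in exactly one of the two lists --
    equivalent to `sort | uniq -u` over the concatenation."""
    counts = Counter(enabled) + Counter(reported)
    return sorted(t for t, n in counts.items() if n == 1)
```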
json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:34.597 01:06:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:34.856 [2024-07-25 01:06:57.216652] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:34.856 01:06:57 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:34.856 01:06:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:35.116 01:06:57 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:35.116 01:06:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:35.116 01:06:57 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:35.116 01:06:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:35.376 01:06:57 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:35.376 01:06:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:35.636 [2024-07-25 01:06:57.890705] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:35.636 01:06:57 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:06:35.636 01:06:57 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:35.636 01:06:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:35.636 01:06:57 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:06:35.636 01:06:57 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:35.636 01:06:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:35.636 01:06:57 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:06:35.636 01:06:57 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:35.636 01:06:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:35.896 MallocBdevForConfigChangeCheck 00:06:35.896 01:06:58 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:06:35.896 01:06:58 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:35.896 01:06:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:35.896 01:06:58 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:06:35.896 01:06:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:36.155 01:06:58 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:06:36.155 INFO: shutting down applications... 
00:06:36.155 01:06:58 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:06:36.155 01:06:58 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:06:36.155 01:06:58 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:06:36.156 01:06:58 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:38.065 Calling clear_iscsi_subsystem 00:06:38.065 Calling clear_nvmf_subsystem 00:06:38.065 Calling clear_nbd_subsystem 00:06:38.065 Calling clear_ublk_subsystem 00:06:38.065 Calling clear_vhost_blk_subsystem 00:06:38.065 Calling clear_vhost_scsi_subsystem 00:06:38.065 Calling clear_bdev_subsystem 00:06:38.065 01:07:00 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:38.065 01:07:00 json_config -- json_config/json_config.sh@347 -- # count=100 00:06:38.065 01:07:00 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:06:38.065 01:07:00 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:38.065 01:07:00 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:38.065 01:07:00 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:38.065 01:07:00 json_config -- json_config/json_config.sh@349 -- # break 00:06:38.065 01:07:00 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:06:38.065 01:07:00 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:06:38.065 01:07:00 json_config -- 
json_config/common.sh@31 -- # local app=target 00:06:38.065 01:07:00 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:38.065 01:07:00 json_config -- json_config/common.sh@35 -- # [[ -n 719962 ]] 00:06:38.065 01:07:00 json_config -- json_config/common.sh@38 -- # kill -SIGINT 719962 00:06:38.065 01:07:00 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:38.065 01:07:00 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:38.065 01:07:00 json_config -- json_config/common.sh@41 -- # kill -0 719962 00:06:38.065 01:07:00 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:38.635 01:07:00 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:38.635 01:07:00 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:38.635 01:07:00 json_config -- json_config/common.sh@41 -- # kill -0 719962 00:06:38.635 01:07:00 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:38.635 01:07:00 json_config -- json_config/common.sh@43 -- # break 00:06:38.635 01:07:00 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:38.635 01:07:00 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:38.635 SPDK target shutdown done 00:06:38.635 01:07:00 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:06:38.635 INFO: relaunching applications... 
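The shutdown sequence above sends SIGINT and then polls `kill -0 $pid` up to 30 times, sleeping 0.5 s between checks, before declaring "SPDK target shutdown done". A hedged Python equivalent of that pattern (helper name and defaults are illustrative):

```python
import signal
import subprocess
import time

def shutdown_app(proc, attempts=30, delay=0.5):
    """Send SIGINT, then poll until the process exits or we give up."""
    proc.send_signal(signal.SIGINT)
    for _ in range(attempts):
        if proc.poll() is not None:   # like `kill -0`: has it exited yet?
            return True               # clean shutdown observed
        time.sleep(delay)
    return False                      # still running after the timeout
```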
00:06:38.635 01:07:00 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:38.635 01:07:00 json_config -- json_config/common.sh@9 -- # local app=target 00:06:38.635 01:07:00 json_config -- json_config/common.sh@10 -- # shift 00:06:38.636 01:07:00 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:38.636 01:07:00 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:38.636 01:07:00 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:38.636 01:07:00 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:38.636 01:07:00 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:38.636 01:07:00 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=721562 00:06:38.636 01:07:00 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:38.636 Waiting for target to run... 00:06:38.636 01:07:00 json_config -- json_config/common.sh@25 -- # waitforlisten 721562 /var/tmp/spdk_tgt.sock 00:06:38.636 01:07:00 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:38.636 01:07:00 json_config -- common/autotest_common.sh@829 -- # '[' -z 721562 ']' 00:06:38.636 01:07:00 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:38.636 01:07:00 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:38.636 01:07:00 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:38.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:06:38.636 01:07:00 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:38.636 01:07:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:38.636 [2024-07-25 01:07:00.981081] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:06:38.636 [2024-07-25 01:07:00.981141] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid721562 ] 00:06:38.636 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.896 [2024-07-25 01:07:01.251735] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.896 [2024-07-25 01:07:01.321288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.252 [2024-07-25 01:07:04.335799] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:42.252 [2024-07-25 01:07:04.368127] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:42.252 01:07:04 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:42.252 01:07:04 json_config -- common/autotest_common.sh@862 -- # return 0 00:06:42.252 01:07:04 json_config -- json_config/common.sh@26 -- # echo '' 00:06:42.252 00:06:42.252 01:07:04 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:06:42.252 01:07:04 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:42.252 INFO: Checking if target configuration is the same... 
00:06:42.252 01:07:04 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:42.252 01:07:04 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:06:42.252 01:07:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:42.252 + '[' 2 -ne 2 ']' 00:06:42.252 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:42.252 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:42.252 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:42.252 +++ basename /dev/fd/62 00:06:42.252 ++ mktemp /tmp/62.XXX 00:06:42.252 + tmp_file_1=/tmp/62.9CM 00:06:42.252 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:42.252 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:42.252 + tmp_file_2=/tmp/spdk_tgt_config.json.qlA 00:06:42.252 + ret=0 00:06:42.252 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:42.252 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:42.512 + diff -u /tmp/62.9CM /tmp/spdk_tgt_config.json.qlA 00:06:42.512 + echo 'INFO: JSON config files are the same' 00:06:42.512 INFO: JSON config files are the same 00:06:42.512 + rm /tmp/62.9CM /tmp/spdk_tgt_config.json.qlA 00:06:42.512 + exit 0 00:06:42.512 01:07:04 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:06:42.512 01:07:04 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:42.512 INFO: changing configuration and checking if this can be detected... 
00:06:42.512 01:07:04 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:42.512 01:07:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:42.512 01:07:04 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:06:42.513 01:07:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:42.513 01:07:04 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:42.513 + '[' 2 -ne 2 ']' 00:06:42.513 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:42.513 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:06:42.513 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:42.513 +++ basename /dev/fd/62 00:06:42.513 ++ mktemp /tmp/62.XXX 00:06:42.513 + tmp_file_1=/tmp/62.olA 00:06:42.513 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:42.513 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:42.513 + tmp_file_2=/tmp/spdk_tgt_config.json.Rwl 00:06:42.513 + ret=0 00:06:42.513 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:43.083 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:43.083 + diff -u /tmp/62.olA /tmp/spdk_tgt_config.json.Rwl 00:06:43.083 + ret=1 00:06:43.083 + echo '=== Start of file: /tmp/62.olA ===' 00:06:43.083 + cat /tmp/62.olA 00:06:43.083 + echo '=== End of file: /tmp/62.olA ===' 00:06:43.083 + echo '' 00:06:43.083 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Rwl ===' 00:06:43.083 + cat /tmp/spdk_tgt_config.json.Rwl 00:06:43.083 + echo '=== End of file: /tmp/spdk_tgt_config.json.Rwl ===' 00:06:43.083 + echo '' 00:06:43.083 + rm /tmp/62.olA /tmp/spdk_tgt_config.json.Rwl 00:06:43.083 + exit 1 00:06:43.083 01:07:05 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:06:43.083 INFO: configuration change detected. 
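Both comparisons above run each config through `config_filter.py -method sort` before `diff -u`, so ordering alone never produces a false mismatch; deleting `MallocBdevForConfigChangeCheck` then makes the sorted files genuinely differ and sets `ret=1`. A minimal order-insensitive comparison in Python — a sketch assuming plain JSON objects and arrays, without SPDK's actual filtering rules:

```python
import json

def normalize(obj):
    """Recursively sort dict keys and list elements so two
    semantically equal configs serialize identically."""
    if isinstance(obj, dict):
        return {k: normalize(v) for k, v in sorted(obj.items())}
    if isinstance(obj, list):
        return sorted((normalize(v) for v in obj), key=json.dumps)
    return obj

def configs_equal(a, b):
    return json.dumps(normalize(a), sort_keys=True) == json.dumps(normalize(b), sort_keys=True)
```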
00:06:43.083 01:07:05 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:06:43.083 01:07:05 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:06:43.083 01:07:05 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:43.083 01:07:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:43.083 01:07:05 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:06:43.083 01:07:05 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:06:43.083 01:07:05 json_config -- json_config/json_config.sh@321 -- # [[ -n 721562 ]] 00:06:43.083 01:07:05 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:06:43.083 01:07:05 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:06:43.083 01:07:05 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:43.083 01:07:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:43.083 01:07:05 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:06:43.083 01:07:05 json_config -- json_config/json_config.sh@197 -- # uname -s 00:06:43.083 01:07:05 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:06:43.083 01:07:05 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:06:43.083 01:07:05 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:06:43.083 01:07:05 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:06:43.083 01:07:05 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:43.083 01:07:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:43.083 01:07:05 json_config -- json_config/json_config.sh@327 -- # killprocess 721562 00:06:43.083 01:07:05 json_config -- common/autotest_common.sh@948 -- # '[' -z 721562 ']' 00:06:43.083 01:07:05 json_config -- common/autotest_common.sh@952 -- # kill -0 721562 
00:06:43.083 01:07:05 json_config -- common/autotest_common.sh@953 -- # uname 00:06:43.083 01:07:05 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:43.083 01:07:05 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 721562 00:06:43.083 01:07:05 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:43.083 01:07:05 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:43.083 01:07:05 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 721562' 00:06:43.083 killing process with pid 721562 00:06:43.083 01:07:05 json_config -- common/autotest_common.sh@967 -- # kill 721562 00:06:43.083 01:07:05 json_config -- common/autotest_common.sh@972 -- # wait 721562 00:06:44.993 01:07:06 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:44.993 01:07:06 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:06:44.993 01:07:06 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:44.993 01:07:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:44.993 01:07:06 json_config -- json_config/json_config.sh@332 -- # return 0 00:06:44.993 01:07:06 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:06:44.993 INFO: Success 00:06:44.993 00:06:44.993 real 0m14.560s 00:06:44.993 user 0m15.441s 00:06:44.993 sys 0m1.684s 00:06:44.993 01:07:07 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:44.993 01:07:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:44.993 ************************************ 00:06:44.993 END TEST json_config 00:06:44.993 ************************************ 00:06:44.993 01:07:07 -- common/autotest_common.sh@1142 -- # return 0 00:06:44.993 01:07:07 -- 
spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:44.993 01:07:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:44.993 01:07:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.993 01:07:07 -- common/autotest_common.sh@10 -- # set +x 00:06:44.993 ************************************ 00:06:44.993 START TEST json_config_extra_key 00:06:44.993 ************************************ 00:06:44.993 01:07:07 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:44.993 01:07:07 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:44.993 01:07:07 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:44.993 01:07:07 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:44.993 01:07:07 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:44.993 01:07:07 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:44.993 01:07:07 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:44.994 01:07:07 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:44.994 01:07:07 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:44.994 01:07:07 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:44.994 01:07:07 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:44.994 01:07:07 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:44.994 01:07:07 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:44.994 01:07:07 json_config_extra_key -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:44.994 01:07:07 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:44.994 01:07:07 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:44.994 01:07:07 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:44.994 01:07:07 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:44.994 01:07:07 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:44.994 01:07:07 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:44.994 01:07:07 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:44.994 01:07:07 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:44.994 01:07:07 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:44.994 01:07:07 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.994 01:07:07 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.994 01:07:07 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.994 01:07:07 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:44.994 01:07:07 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.994 01:07:07 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:44.994 01:07:07 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:44.994 01:07:07 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:44.994 01:07:07 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:44.994 01:07:07 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:44.994 01:07:07 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:06:44.994 01:07:07 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:44.994 01:07:07 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:44.994 01:07:07 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:44.994 01:07:07 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:44.994 01:07:07 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:44.994 01:07:07 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:44.994 01:07:07 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:44.994 01:07:07 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:44.994 01:07:07 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:44.994 01:07:07 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:44.994 01:07:07 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:44.994 01:07:07 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:44.994 01:07:07 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:44.994 01:07:07 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:44.994 INFO: launching applications... 
00:06:44.994 01:07:07 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:44.994 01:07:07 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:44.994 01:07:07 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:44.994 01:07:07 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:44.994 01:07:07 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:44.994 01:07:07 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:44.994 01:07:07 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:44.994 01:07:07 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:44.994 01:07:07 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=722734 00:06:44.994 01:07:07 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:44.994 Waiting for target to run... 
00:06:44.994 01:07:07 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 722734 /var/tmp/spdk_tgt.sock 00:06:44.994 01:07:07 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 722734 ']' 00:06:44.994 01:07:07 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:44.994 01:07:07 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:44.994 01:07:07 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:44.994 01:07:07 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:44.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:44.994 01:07:07 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:44.994 01:07:07 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:44.994 [2024-07-25 01:07:07.220232] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:06:44.994 [2024-07-25 01:07:07.220286] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid722734 ] 00:06:44.994 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.254 [2024-07-25 01:07:07.649587] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.254 [2024-07-25 01:07:07.733133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.824 01:07:08 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:45.824 01:07:08 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:06:45.824 01:07:08 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:45.824 00:06:45.824 01:07:08 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:45.824 INFO: shutting down applications... 
00:06:45.824 01:07:08 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:45.824 01:07:08 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:45.824 01:07:08 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:45.824 01:07:08 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 722734 ]] 00:06:45.824 01:07:08 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 722734 00:06:45.824 01:07:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:45.824 01:07:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:45.824 01:07:08 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 722734 00:06:45.824 01:07:08 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:46.084 01:07:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:46.084 01:07:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:46.084 01:07:08 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 722734 00:06:46.084 01:07:08 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:46.084 01:07:08 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:46.084 01:07:08 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:46.084 01:07:08 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:46.084 SPDK target shutdown done 00:06:46.084 01:07:08 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:46.084 Success 00:06:46.084 00:06:46.084 real 0m1.448s 00:06:46.084 user 0m1.090s 00:06:46.084 sys 0m0.509s 00:06:46.084 01:07:08 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:46.084 01:07:08 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:46.084 ************************************ 
00:06:46.084 END TEST json_config_extra_key 00:06:46.084 ************************************ 00:06:46.084 01:07:08 -- common/autotest_common.sh@1142 -- # return 0 00:06:46.084 01:07:08 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:46.084 01:07:08 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:46.084 01:07:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.084 01:07:08 -- common/autotest_common.sh@10 -- # set +x 00:06:46.345 ************************************ 00:06:46.345 START TEST alias_rpc 00:06:46.345 ************************************ 00:06:46.345 01:07:08 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:46.345 * Looking for test storage... 00:06:46.345 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:46.345 01:07:08 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:46.345 01:07:08 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=723014 00:06:46.345 01:07:08 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:46.345 01:07:08 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 723014 00:06:46.345 01:07:08 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 723014 ']' 00:06:46.345 01:07:08 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.345 01:07:08 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:46.345 01:07:08 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:46.345 01:07:08 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:46.345 01:07:08 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.345 [2024-07-25 01:07:08.718187] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:06:46.345 [2024-07-25 01:07:08.718236] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid723014 ] 00:06:46.345 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.345 [2024-07-25 01:07:08.773177] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.605 [2024-07-25 01:07:08.847206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.175 01:07:09 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:47.175 01:07:09 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:47.175 01:07:09 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:47.436 01:07:09 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 723014 00:06:47.436 01:07:09 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 723014 ']' 00:06:47.436 01:07:09 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 723014 00:06:47.436 01:07:09 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:06:47.436 01:07:09 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:47.436 01:07:09 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 723014 00:06:47.436 01:07:09 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:47.436 01:07:09 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:47.436 01:07:09 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 723014' 00:06:47.436 
killing process with pid 723014 00:06:47.436 01:07:09 alias_rpc -- common/autotest_common.sh@967 -- # kill 723014 00:06:47.436 01:07:09 alias_rpc -- common/autotest_common.sh@972 -- # wait 723014 00:06:47.696 00:06:47.696 real 0m1.474s 00:06:47.696 user 0m1.611s 00:06:47.696 sys 0m0.397s 00:06:47.696 01:07:10 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.696 01:07:10 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.696 ************************************ 00:06:47.696 END TEST alias_rpc 00:06:47.696 ************************************ 00:06:47.696 01:07:10 -- common/autotest_common.sh@1142 -- # return 0 00:06:47.696 01:07:10 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:47.696 01:07:10 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:47.696 01:07:10 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:47.696 01:07:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.696 01:07:10 -- common/autotest_common.sh@10 -- # set +x 00:06:47.696 ************************************ 00:06:47.696 START TEST spdkcli_tcp 00:06:47.696 ************************************ 00:06:47.696 01:07:10 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:47.956 * Looking for test storage... 
00:06:47.956 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:47.956 01:07:10 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:47.956 01:07:10 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:47.956 01:07:10 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:47.956 01:07:10 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:47.956 01:07:10 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:47.956 01:07:10 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:47.956 01:07:10 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:47.956 01:07:10 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:47.956 01:07:10 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:47.956 01:07:10 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=723306 00:06:47.956 01:07:10 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 723306 00:06:47.956 01:07:10 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:47.956 01:07:10 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 723306 ']' 00:06:47.956 01:07:10 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.956 01:07:10 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:47.956 01:07:10 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:47.956 01:07:10 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:47.956 01:07:10 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:47.956 [2024-07-25 01:07:10.269437] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:06:47.956 [2024-07-25 01:07:10.269487] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid723306 ] 00:06:47.956 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.956 [2024-07-25 01:07:10.321958] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:47.956 [2024-07-25 01:07:10.401853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.956 [2024-07-25 01:07:10.401855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.897 01:07:11 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:48.897 01:07:11 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:06:48.897 01:07:11 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=723538 00:06:48.897 01:07:11 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:48.897 01:07:11 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:48.897 [ 00:06:48.897 "bdev_malloc_delete", 00:06:48.897 "bdev_malloc_create", 00:06:48.897 "bdev_null_resize", 00:06:48.897 "bdev_null_delete", 00:06:48.897 "bdev_null_create", 00:06:48.897 "bdev_nvme_cuse_unregister", 00:06:48.897 "bdev_nvme_cuse_register", 00:06:48.897 "bdev_opal_new_user", 00:06:48.897 "bdev_opal_set_lock_state", 00:06:48.897 "bdev_opal_delete", 00:06:48.897 "bdev_opal_get_info", 00:06:48.897 "bdev_opal_create", 00:06:48.897 "bdev_nvme_opal_revert", 00:06:48.897 
"bdev_nvme_opal_init", 00:06:48.897 "bdev_nvme_send_cmd", 00:06:48.897 "bdev_nvme_get_path_iostat", 00:06:48.897 "bdev_nvme_get_mdns_discovery_info", 00:06:48.897 "bdev_nvme_stop_mdns_discovery", 00:06:48.897 "bdev_nvme_start_mdns_discovery", 00:06:48.897 "bdev_nvme_set_multipath_policy", 00:06:48.897 "bdev_nvme_set_preferred_path", 00:06:48.897 "bdev_nvme_get_io_paths", 00:06:48.897 "bdev_nvme_remove_error_injection", 00:06:48.897 "bdev_nvme_add_error_injection", 00:06:48.897 "bdev_nvme_get_discovery_info", 00:06:48.897 "bdev_nvme_stop_discovery", 00:06:48.897 "bdev_nvme_start_discovery", 00:06:48.897 "bdev_nvme_get_controller_health_info", 00:06:48.897 "bdev_nvme_disable_controller", 00:06:48.897 "bdev_nvme_enable_controller", 00:06:48.897 "bdev_nvme_reset_controller", 00:06:48.897 "bdev_nvme_get_transport_statistics", 00:06:48.897 "bdev_nvme_apply_firmware", 00:06:48.897 "bdev_nvme_detach_controller", 00:06:48.897 "bdev_nvme_get_controllers", 00:06:48.897 "bdev_nvme_attach_controller", 00:06:48.897 "bdev_nvme_set_hotplug", 00:06:48.897 "bdev_nvme_set_options", 00:06:48.897 "bdev_passthru_delete", 00:06:48.897 "bdev_passthru_create", 00:06:48.897 "bdev_lvol_set_parent_bdev", 00:06:48.897 "bdev_lvol_set_parent", 00:06:48.897 "bdev_lvol_check_shallow_copy", 00:06:48.897 "bdev_lvol_start_shallow_copy", 00:06:48.897 "bdev_lvol_grow_lvstore", 00:06:48.897 "bdev_lvol_get_lvols", 00:06:48.897 "bdev_lvol_get_lvstores", 00:06:48.897 "bdev_lvol_delete", 00:06:48.897 "bdev_lvol_set_read_only", 00:06:48.897 "bdev_lvol_resize", 00:06:48.897 "bdev_lvol_decouple_parent", 00:06:48.897 "bdev_lvol_inflate", 00:06:48.897 "bdev_lvol_rename", 00:06:48.897 "bdev_lvol_clone_bdev", 00:06:48.897 "bdev_lvol_clone", 00:06:48.897 "bdev_lvol_snapshot", 00:06:48.897 "bdev_lvol_create", 00:06:48.897 "bdev_lvol_delete_lvstore", 00:06:48.897 "bdev_lvol_rename_lvstore", 00:06:48.897 "bdev_lvol_create_lvstore", 00:06:48.897 "bdev_raid_set_options", 00:06:48.897 "bdev_raid_remove_base_bdev", 
00:06:48.897 "bdev_raid_add_base_bdev", 00:06:48.897 "bdev_raid_delete", 00:06:48.897 "bdev_raid_create", 00:06:48.897 "bdev_raid_get_bdevs", 00:06:48.897 "bdev_error_inject_error", 00:06:48.897 "bdev_error_delete", 00:06:48.897 "bdev_error_create", 00:06:48.897 "bdev_split_delete", 00:06:48.897 "bdev_split_create", 00:06:48.897 "bdev_delay_delete", 00:06:48.897 "bdev_delay_create", 00:06:48.898 "bdev_delay_update_latency", 00:06:48.898 "bdev_zone_block_delete", 00:06:48.898 "bdev_zone_block_create", 00:06:48.898 "blobfs_create", 00:06:48.898 "blobfs_detect", 00:06:48.898 "blobfs_set_cache_size", 00:06:48.898 "bdev_aio_delete", 00:06:48.898 "bdev_aio_rescan", 00:06:48.898 "bdev_aio_create", 00:06:48.898 "bdev_ftl_set_property", 00:06:48.898 "bdev_ftl_get_properties", 00:06:48.898 "bdev_ftl_get_stats", 00:06:48.898 "bdev_ftl_unmap", 00:06:48.898 "bdev_ftl_unload", 00:06:48.898 "bdev_ftl_delete", 00:06:48.898 "bdev_ftl_load", 00:06:48.898 "bdev_ftl_create", 00:06:48.898 "bdev_virtio_attach_controller", 00:06:48.898 "bdev_virtio_scsi_get_devices", 00:06:48.898 "bdev_virtio_detach_controller", 00:06:48.898 "bdev_virtio_blk_set_hotplug", 00:06:48.898 "bdev_iscsi_delete", 00:06:48.898 "bdev_iscsi_create", 00:06:48.898 "bdev_iscsi_set_options", 00:06:48.898 "accel_error_inject_error", 00:06:48.898 "ioat_scan_accel_module", 00:06:48.898 "dsa_scan_accel_module", 00:06:48.898 "iaa_scan_accel_module", 00:06:48.898 "vfu_virtio_create_scsi_endpoint", 00:06:48.898 "vfu_virtio_scsi_remove_target", 00:06:48.898 "vfu_virtio_scsi_add_target", 00:06:48.898 "vfu_virtio_create_blk_endpoint", 00:06:48.898 "vfu_virtio_delete_endpoint", 00:06:48.898 "keyring_file_remove_key", 00:06:48.898 "keyring_file_add_key", 00:06:48.898 "keyring_linux_set_options", 00:06:48.898 "iscsi_get_histogram", 00:06:48.898 "iscsi_enable_histogram", 00:06:48.898 "iscsi_set_options", 00:06:48.898 "iscsi_get_auth_groups", 00:06:48.898 "iscsi_auth_group_remove_secret", 00:06:48.898 "iscsi_auth_group_add_secret", 
00:06:48.898 "iscsi_delete_auth_group", 00:06:48.898 "iscsi_create_auth_group", 00:06:48.898 "iscsi_set_discovery_auth", 00:06:48.898 "iscsi_get_options", 00:06:48.898 "iscsi_target_node_request_logout", 00:06:48.898 "iscsi_target_node_set_redirect", 00:06:48.898 "iscsi_target_node_set_auth", 00:06:48.898 "iscsi_target_node_add_lun", 00:06:48.898 "iscsi_get_stats", 00:06:48.898 "iscsi_get_connections", 00:06:48.898 "iscsi_portal_group_set_auth", 00:06:48.898 "iscsi_start_portal_group", 00:06:48.898 "iscsi_delete_portal_group", 00:06:48.898 "iscsi_create_portal_group", 00:06:48.898 "iscsi_get_portal_groups", 00:06:48.898 "iscsi_delete_target_node", 00:06:48.898 "iscsi_target_node_remove_pg_ig_maps", 00:06:48.898 "iscsi_target_node_add_pg_ig_maps", 00:06:48.898 "iscsi_create_target_node", 00:06:48.898 "iscsi_get_target_nodes", 00:06:48.898 "iscsi_delete_initiator_group", 00:06:48.898 "iscsi_initiator_group_remove_initiators", 00:06:48.898 "iscsi_initiator_group_add_initiators", 00:06:48.898 "iscsi_create_initiator_group", 00:06:48.898 "iscsi_get_initiator_groups", 00:06:48.898 "nvmf_set_crdt", 00:06:48.898 "nvmf_set_config", 00:06:48.898 "nvmf_set_max_subsystems", 00:06:48.898 "nvmf_stop_mdns_prr", 00:06:48.898 "nvmf_publish_mdns_prr", 00:06:48.898 "nvmf_subsystem_get_listeners", 00:06:48.898 "nvmf_subsystem_get_qpairs", 00:06:48.898 "nvmf_subsystem_get_controllers", 00:06:48.898 "nvmf_get_stats", 00:06:48.898 "nvmf_get_transports", 00:06:48.898 "nvmf_create_transport", 00:06:48.898 "nvmf_get_targets", 00:06:48.898 "nvmf_delete_target", 00:06:48.898 "nvmf_create_target", 00:06:48.898 "nvmf_subsystem_allow_any_host", 00:06:48.898 "nvmf_subsystem_remove_host", 00:06:48.898 "nvmf_subsystem_add_host", 00:06:48.898 "nvmf_ns_remove_host", 00:06:48.898 "nvmf_ns_add_host", 00:06:48.898 "nvmf_subsystem_remove_ns", 00:06:48.898 "nvmf_subsystem_add_ns", 00:06:48.898 "nvmf_subsystem_listener_set_ana_state", 00:06:48.898 "nvmf_discovery_get_referrals", 00:06:48.898 
"nvmf_discovery_remove_referral", 00:06:48.898 "nvmf_discovery_add_referral", 00:06:48.898 "nvmf_subsystem_remove_listener", 00:06:48.898 "nvmf_subsystem_add_listener", 00:06:48.898 "nvmf_delete_subsystem", 00:06:48.898 "nvmf_create_subsystem", 00:06:48.898 "nvmf_get_subsystems", 00:06:48.898 "env_dpdk_get_mem_stats", 00:06:48.898 "nbd_get_disks", 00:06:48.898 "nbd_stop_disk", 00:06:48.898 "nbd_start_disk", 00:06:48.898 "ublk_recover_disk", 00:06:48.898 "ublk_get_disks", 00:06:48.898 "ublk_stop_disk", 00:06:48.898 "ublk_start_disk", 00:06:48.898 "ublk_destroy_target", 00:06:48.898 "ublk_create_target", 00:06:48.898 "virtio_blk_create_transport", 00:06:48.898 "virtio_blk_get_transports", 00:06:48.898 "vhost_controller_set_coalescing", 00:06:48.898 "vhost_get_controllers", 00:06:48.898 "vhost_delete_controller", 00:06:48.898 "vhost_create_blk_controller", 00:06:48.898 "vhost_scsi_controller_remove_target", 00:06:48.898 "vhost_scsi_controller_add_target", 00:06:48.898 "vhost_start_scsi_controller", 00:06:48.898 "vhost_create_scsi_controller", 00:06:48.898 "thread_set_cpumask", 00:06:48.898 "framework_get_governor", 00:06:48.898 "framework_get_scheduler", 00:06:48.898 "framework_set_scheduler", 00:06:48.898 "framework_get_reactors", 00:06:48.898 "thread_get_io_channels", 00:06:48.898 "thread_get_pollers", 00:06:48.898 "thread_get_stats", 00:06:48.898 "framework_monitor_context_switch", 00:06:48.898 "spdk_kill_instance", 00:06:48.898 "log_enable_timestamps", 00:06:48.898 "log_get_flags", 00:06:48.898 "log_clear_flag", 00:06:48.898 "log_set_flag", 00:06:48.898 "log_get_level", 00:06:48.898 "log_set_level", 00:06:48.898 "log_get_print_level", 00:06:48.898 "log_set_print_level", 00:06:48.898 "framework_enable_cpumask_locks", 00:06:48.898 "framework_disable_cpumask_locks", 00:06:48.898 "framework_wait_init", 00:06:48.898 "framework_start_init", 00:06:48.898 "scsi_get_devices", 00:06:48.898 "bdev_get_histogram", 00:06:48.898 "bdev_enable_histogram", 00:06:48.898 
"bdev_set_qos_limit", 00:06:48.898 "bdev_set_qd_sampling_period", 00:06:48.898 "bdev_get_bdevs", 00:06:48.898 "bdev_reset_iostat", 00:06:48.898 "bdev_get_iostat", 00:06:48.898 "bdev_examine", 00:06:48.898 "bdev_wait_for_examine", 00:06:48.898 "bdev_set_options", 00:06:48.898 "notify_get_notifications", 00:06:48.898 "notify_get_types", 00:06:48.898 "accel_get_stats", 00:06:48.898 "accel_set_options", 00:06:48.898 "accel_set_driver", 00:06:48.898 "accel_crypto_key_destroy", 00:06:48.898 "accel_crypto_keys_get", 00:06:48.898 "accel_crypto_key_create", 00:06:48.898 "accel_assign_opc", 00:06:48.898 "accel_get_module_info", 00:06:48.898 "accel_get_opc_assignments", 00:06:48.898 "vmd_rescan", 00:06:48.898 "vmd_remove_device", 00:06:48.898 "vmd_enable", 00:06:48.898 "sock_get_default_impl", 00:06:48.898 "sock_set_default_impl", 00:06:48.898 "sock_impl_set_options", 00:06:48.898 "sock_impl_get_options", 00:06:48.898 "iobuf_get_stats", 00:06:48.898 "iobuf_set_options", 00:06:48.898 "keyring_get_keys", 00:06:48.898 "framework_get_pci_devices", 00:06:48.898 "framework_get_config", 00:06:48.898 "framework_get_subsystems", 00:06:48.898 "vfu_tgt_set_base_path", 00:06:48.898 "trace_get_info", 00:06:48.898 "trace_get_tpoint_group_mask", 00:06:48.898 "trace_disable_tpoint_group", 00:06:48.898 "trace_enable_tpoint_group", 00:06:48.898 "trace_clear_tpoint_mask", 00:06:48.898 "trace_set_tpoint_mask", 00:06:48.898 "spdk_get_version", 00:06:48.898 "rpc_get_methods" 00:06:48.898 ] 00:06:48.898 01:07:11 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:48.898 01:07:11 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:48.898 01:07:11 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:48.898 01:07:11 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:48.898 01:07:11 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 723306 00:06:48.898 01:07:11 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 723306 ']' 
00:06:48.898 01:07:11 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 723306 00:06:48.898 01:07:11 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:06:48.898 01:07:11 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:48.898 01:07:11 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 723306 00:06:48.898 01:07:11 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:48.898 01:07:11 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:48.898 01:07:11 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 723306' 00:06:48.898 killing process with pid 723306 00:06:48.898 01:07:11 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 723306 00:06:48.898 01:07:11 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 723306 00:06:49.157 00:06:49.157 real 0m1.499s 00:06:49.157 user 0m2.785s 00:06:49.157 sys 0m0.426s 00:06:49.157 01:07:11 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:49.157 01:07:11 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:49.157 ************************************ 00:06:49.157 END TEST spdkcli_tcp 00:06:49.157 ************************************ 00:06:49.417 01:07:11 -- common/autotest_common.sh@1142 -- # return 0 00:06:49.417 01:07:11 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:49.417 01:07:11 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:49.417 01:07:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.417 01:07:11 -- common/autotest_common.sh@10 -- # set +x 00:06:49.417 ************************************ 00:06:49.417 START TEST dpdk_mem_utility 00:06:49.417 ************************************ 00:06:49.417 01:07:11 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:49.417 * Looking for test storage... 00:06:49.417 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:49.417 01:07:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:49.417 01:07:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=723610 00:06:49.417 01:07:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 723610 00:06:49.417 01:07:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:49.417 01:07:11 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 723610 ']' 00:06:49.417 01:07:11 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.417 01:07:11 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:49.417 01:07:11 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.417 01:07:11 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:49.417 01:07:11 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:49.417 [2024-07-25 01:07:11.816221] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:06:49.417 [2024-07-25 01:07:11.816273] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid723610 ] 00:06:49.417 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.417 [2024-07-25 01:07:11.870299] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.677 [2024-07-25 01:07:11.951341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.248 01:07:12 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:50.248 01:07:12 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:06:50.248 01:07:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:50.248 01:07:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:50.248 01:07:12 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:50.248 01:07:12 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:50.248 { 00:06:50.248 "filename": "/tmp/spdk_mem_dump.txt" 00:06:50.248 } 00:06:50.248 01:07:12 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:50.248 01:07:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:50.248 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:50.248 1 heaps totaling size 814.000000 MiB 00:06:50.248 size: 814.000000 MiB heap id: 0 00:06:50.248 end heaps---------- 00:06:50.248 8 mempools totaling size 598.116089 MiB 00:06:50.248 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:50.248 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:50.248 size: 84.521057 MiB name: bdev_io_723610 00:06:50.248 size: 51.011292 MiB name: evtpool_723610 
00:06:50.248 size: 50.003479 MiB name: msgpool_723610 00:06:50.248 size: 21.763794 MiB name: PDU_Pool 00:06:50.248 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:50.248 size: 0.026123 MiB name: Session_Pool 00:06:50.248 end mempools------- 00:06:50.248 6 memzones totaling size 4.142822 MiB 00:06:50.248 size: 1.000366 MiB name: RG_ring_0_723610 00:06:50.248 size: 1.000366 MiB name: RG_ring_1_723610 00:06:50.248 size: 1.000366 MiB name: RG_ring_4_723610 00:06:50.248 size: 1.000366 MiB name: RG_ring_5_723610 00:06:50.248 size: 0.125366 MiB name: RG_ring_2_723610 00:06:50.248 size: 0.015991 MiB name: RG_ring_3_723610 00:06:50.248 end memzones------- 00:06:50.248 01:07:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:50.248 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:50.248 list of free elements. size: 12.519348 MiB 00:06:50.248 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:50.248 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:50.248 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:50.248 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:50.248 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:50.248 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:50.248 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:50.248 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:50.248 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:50.248 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:50.248 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:50.248 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:50.248 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:50.248 element at address: 0x200027e00000 with size: 0.410034 MiB 
00:06:50.248 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:50.248 list of standard malloc elements. size: 199.218079 MiB 00:06:50.248 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:50.248 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:50.248 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:50.248 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:50.248 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:50.248 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:50.248 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:50.248 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:50.248 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:50.248 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:50.248 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:50.248 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:50.248 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:50.248 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:50.248 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:50.248 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:50.248 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:50.248 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:50.248 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:50.248 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:50.248 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:50.248 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:50.248 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:50.248 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:50.248 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:50.248 element at address: 0x200003eff0c0 with 
size: 0.000183 MiB 00:06:50.248 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:50.248 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:50.248 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:50.248 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:50.248 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:50.248 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:50.248 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:50.248 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:50.248 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:50.248 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:50.248 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:50.248 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:50.248 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:50.248 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:50.248 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:50.248 list of memzone associated elements. 
size: 602.262573 MiB 00:06:50.248 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:50.248 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:50.248 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:50.248 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:50.248 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:50.248 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_723610_0 00:06:50.248 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:50.248 associated memzone info: size: 48.002930 MiB name: MP_evtpool_723610_0 00:06:50.248 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:50.248 associated memzone info: size: 48.002930 MiB name: MP_msgpool_723610_0 00:06:50.248 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:50.248 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:50.248 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:50.248 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:50.248 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:50.248 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_723610 00:06:50.248 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:50.248 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_723610 00:06:50.248 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:50.248 associated memzone info: size: 1.007996 MiB name: MP_evtpool_723610 00:06:50.248 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:50.248 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:50.248 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:50.248 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:50.248 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:50.249 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:50.249 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:50.249 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:50.249 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:50.249 associated memzone info: size: 1.000366 MiB name: RG_ring_0_723610 00:06:50.249 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:50.249 associated memzone info: size: 1.000366 MiB name: RG_ring_1_723610 00:06:50.249 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:50.249 associated memzone info: size: 1.000366 MiB name: RG_ring_4_723610 00:06:50.249 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:50.249 associated memzone info: size: 1.000366 MiB name: RG_ring_5_723610 00:06:50.249 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:50.249 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_723610 00:06:50.249 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:50.249 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:50.249 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:50.249 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:50.249 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:50.249 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:50.249 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:50.249 associated memzone info: size: 0.125366 MiB name: RG_ring_2_723610 00:06:50.249 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:50.249 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:50.249 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:50.249 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:50.249 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:50.249 
associated memzone info: size: 0.015991 MiB name: RG_ring_3_723610 00:06:50.249 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:50.249 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:50.249 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:50.249 associated memzone info: size: 0.000183 MiB name: MP_msgpool_723610 00:06:50.249 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:50.249 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_723610 00:06:50.249 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:50.249 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:50.249 01:07:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:50.249 01:07:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 723610 00:06:50.249 01:07:12 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 723610 ']' 00:06:50.249 01:07:12 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 723610 00:06:50.249 01:07:12 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:06:50.249 01:07:12 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:50.249 01:07:12 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 723610 00:06:50.509 01:07:12 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:50.509 01:07:12 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:50.509 01:07:12 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 723610' 00:06:50.509 killing process with pid 723610 00:06:50.509 01:07:12 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 723610 00:06:50.509 01:07:12 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 723610 00:06:50.769 00:06:50.769 real 0m1.368s 00:06:50.769 user 0m1.453s 
00:06:50.769 sys 0m0.369s 00:06:50.769 01:07:13 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:50.769 01:07:13 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:50.769 ************************************ 00:06:50.769 END TEST dpdk_mem_utility 00:06:50.769 ************************************ 00:06:50.769 01:07:13 -- common/autotest_common.sh@1142 -- # return 0 00:06:50.769 01:07:13 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:50.769 01:07:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:50.769 01:07:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.769 01:07:13 -- common/autotest_common.sh@10 -- # set +x 00:06:50.769 ************************************ 00:06:50.769 START TEST event 00:06:50.769 ************************************ 00:06:50.769 01:07:13 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:50.769 * Looking for test storage... 
00:06:50.769 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:50.769 01:07:13 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:50.769 01:07:13 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:50.769 01:07:13 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:50.769 01:07:13 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:50.769 01:07:13 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.769 01:07:13 event -- common/autotest_common.sh@10 -- # set +x 00:06:50.769 ************************************ 00:06:50.769 START TEST event_perf 00:06:50.769 ************************************ 00:06:50.769 01:07:13 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:51.046 Running I/O for 1 seconds...[2024-07-25 01:07:13.275357] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:06:51.046 [2024-07-25 01:07:13.275429] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid723964 ] 00:06:51.046 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.046 [2024-07-25 01:07:13.334000] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:51.046 [2024-07-25 01:07:13.413799] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.046 [2024-07-25 01:07:13.413886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:51.046 [2024-07-25 01:07:13.413978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:51.046 [2024-07-25 01:07:13.413980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.987 Running I/O for 1 seconds... 00:06:51.987 lcore 0: 208746 00:06:51.987 lcore 1: 208745 00:06:51.987 lcore 2: 208746 00:06:51.987 lcore 3: 208746 00:06:52.247 done. 
00:06:52.247 00:06:52.247 real 0m1.231s 00:06:52.247 user 0m4.143s 00:06:52.247 sys 0m0.086s 00:06:52.247 01:07:14 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.247 01:07:14 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:52.247 ************************************ 00:06:52.247 END TEST event_perf 00:06:52.247 ************************************ 00:06:52.247 01:07:14 event -- common/autotest_common.sh@1142 -- # return 0 00:06:52.247 01:07:14 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:52.247 01:07:14 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:52.247 01:07:14 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.247 01:07:14 event -- common/autotest_common.sh@10 -- # set +x 00:06:52.247 ************************************ 00:06:52.247 START TEST event_reactor 00:06:52.247 ************************************ 00:06:52.247 01:07:14 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:52.248 [2024-07-25 01:07:14.572943] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:06:52.248 [2024-07-25 01:07:14.573034] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid724174 ] 00:06:52.248 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.248 [2024-07-25 01:07:14.632302] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.248 [2024-07-25 01:07:14.704235] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.627 test_start 00:06:53.627 oneshot 00:06:53.627 tick 100 00:06:53.627 tick 100 00:06:53.627 tick 250 00:06:53.627 tick 100 00:06:53.627 tick 100 00:06:53.627 tick 100 00:06:53.627 tick 250 00:06:53.627 tick 500 00:06:53.627 tick 100 00:06:53.627 tick 100 00:06:53.627 tick 250 00:06:53.627 tick 100 00:06:53.627 tick 100 00:06:53.627 test_end 00:06:53.627 00:06:53.627 real 0m1.219s 00:06:53.627 user 0m1.149s 00:06:53.627 sys 0m0.066s 00:06:53.627 01:07:15 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:53.627 01:07:15 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:53.627 ************************************ 00:06:53.627 END TEST event_reactor 00:06:53.627 ************************************ 00:06:53.627 01:07:15 event -- common/autotest_common.sh@1142 -- # return 0 00:06:53.627 01:07:15 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:53.627 01:07:15 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:53.627 01:07:15 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.627 01:07:15 event -- common/autotest_common.sh@10 -- # set +x 00:06:53.627 ************************************ 00:06:53.627 START TEST event_reactor_perf 00:06:53.627 ************************************ 00:06:53.627 01:07:15 
event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:53.627 [2024-07-25 01:07:15.851741] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:06:53.627 [2024-07-25 01:07:15.851801] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid724398 ] 00:06:53.627 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.627 [2024-07-25 01:07:15.910049] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.627 [2024-07-25 01:07:15.981222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.568 test_start 00:06:54.568 test_end 00:06:54.568 Performance: 501123 events per second 00:06:54.568 00:06:54.568 real 0m1.220s 00:06:54.568 user 0m1.140s 00:06:54.568 sys 0m0.076s 00:06:54.568 01:07:17 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:54.568 01:07:17 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:54.568 ************************************ 00:06:54.568 END TEST event_reactor_perf 00:06:54.568 ************************************ 00:06:54.828 01:07:17 event -- common/autotest_common.sh@1142 -- # return 0 00:06:54.828 01:07:17 event -- event/event.sh@49 -- # uname -s 00:06:54.828 01:07:17 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:54.828 01:07:17 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:54.828 01:07:17 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:54.828 01:07:17 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.828 01:07:17 event -- common/autotest_common.sh@10 -- # set +x 
00:06:54.828 ************************************ 00:06:54.828 START TEST event_scheduler 00:06:54.828 ************************************ 00:06:54.828 01:07:17 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:54.828 * Looking for test storage... 00:06:54.828 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:54.828 01:07:17 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:54.828 01:07:17 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=724679 00:06:54.828 01:07:17 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:54.828 01:07:17 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:54.828 01:07:17 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 724679 00:06:54.828 01:07:17 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 724679 ']' 00:06:54.828 01:07:17 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.828 01:07:17 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:54.828 01:07:17 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.828 01:07:17 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:54.828 01:07:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:54.828 [2024-07-25 01:07:17.235773] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:06:54.828 [2024-07-25 01:07:17.235826] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid724679 ] 00:06:54.828 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.828 [2024-07-25 01:07:17.288496] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:55.088 [2024-07-25 01:07:17.366504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.089 [2024-07-25 01:07:17.366590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:55.089 [2024-07-25 01:07:17.366695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:55.089 [2024-07-25 01:07:17.366697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:55.663 01:07:18 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:55.663 01:07:18 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:06:55.663 01:07:18 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:55.663 01:07:18 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.663 01:07:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:55.663 [2024-07-25 01:07:18.057141] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:55.663 [2024-07-25 01:07:18.057159] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:55.663 [2024-07-25 01:07:18.057168] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:55.663 [2024-07-25 01:07:18.057177] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:55.663 [2024-07-25 01:07:18.057182] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting 
scheduler core busy to 95 00:06:55.663 01:07:18 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.663 01:07:18 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:55.663 01:07:18 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.663 01:07:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:55.663 [2024-07-25 01:07:18.128973] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:55.663 01:07:18 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.663 01:07:18 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:55.663 01:07:18 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:55.663 01:07:18 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.663 01:07:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:55.978 ************************************ 00:06:55.978 START TEST scheduler_create_thread 00:06:55.978 ************************************ 00:06:55.978 01:07:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:06:55.978 01:07:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:55.978 01:07:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.978 01:07:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:55.978 2 00:06:55.978 01:07:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.978 01:07:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd 
--plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:55.978 01:07:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.978 01:07:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:55.978 3 00:06:55.978 01:07:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.978 01:07:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:55.978 01:07:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.978 01:07:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:55.978 4 00:06:55.978 01:07:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.978 01:07:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:55.978 01:07:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.978 01:07:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:55.978 5 00:06:55.978 01:07:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.978 01:07:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:55.978 01:07:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.978 01:07:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:55.978 6 
00:06:55.978 01:07:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.978 01:07:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:55.978 01:07:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.978 01:07:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:55.978 7 00:06:55.978 01:07:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.978 01:07:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:55.978 01:07:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.978 01:07:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:55.978 8 00:06:55.978 01:07:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.978 01:07:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:55.978 01:07:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.978 01:07:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:55.978 9 00:06:55.978 01:07:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.978 01:07:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:55.978 01:07:18 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.978 01:07:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:55.978 10 00:06:55.978 01:07:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.978 01:07:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:55.978 01:07:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.978 01:07:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:55.979 01:07:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.979 01:07:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:55.979 01:07:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:55.979 01:07:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.979 01:07:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.549 01:07:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.549 01:07:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:56.549 01:07:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.549 01:07:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:57.930 01:07:20 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:57.930 01:07:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:57.930 01:07:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:57.930 01:07:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:57.930 01:07:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:58.868 01:07:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:58.868 00:06:58.868 real 0m3.102s 00:06:58.868 user 0m0.021s 00:06:58.868 sys 0m0.008s 00:06:58.868 01:07:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.868 01:07:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:58.868 ************************************ 00:06:58.868 END TEST scheduler_create_thread 00:06:58.868 ************************************ 00:06:58.868 01:07:21 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:06:58.868 01:07:21 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:58.868 01:07:21 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 724679 00:06:58.868 01:07:21 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 724679 ']' 00:06:58.868 01:07:21 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 724679 00:06:58.868 01:07:21 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:06:58.868 01:07:21 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:58.868 01:07:21 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 724679 00:06:58.868 01:07:21 event.event_scheduler -- 
common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:58.868 01:07:21 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:58.868 01:07:21 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 724679' 00:06:58.868 killing process with pid 724679 00:06:58.868 01:07:21 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 724679 00:06:58.868 01:07:21 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 724679 00:06:59.438 [2024-07-25 01:07:21.644412] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:59.438 00:06:59.438 real 0m4.746s 00:06:59.438 user 0m9.292s 00:06:59.438 sys 0m0.347s 00:06:59.438 01:07:21 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.438 01:07:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:59.438 ************************************ 00:06:59.438 END TEST event_scheduler 00:06:59.438 ************************************ 00:06:59.438 01:07:21 event -- common/autotest_common.sh@1142 -- # return 0 00:06:59.438 01:07:21 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:59.438 01:07:21 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:59.438 01:07:21 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:59.438 01:07:21 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.438 01:07:21 event -- common/autotest_common.sh@10 -- # set +x 00:06:59.438 ************************************ 00:06:59.438 START TEST app_repeat 00:06:59.438 ************************************ 00:06:59.438 01:07:21 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:06:59.438 01:07:21 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.698 01:07:21 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:59.698 01:07:21 
event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:59.698 01:07:21 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:59.698 01:07:21 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:59.698 01:07:21 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:59.698 01:07:21 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:59.698 01:07:21 event.app_repeat -- event/event.sh@19 -- # repeat_pid=725642 00:06:59.698 01:07:21 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:59.698 01:07:21 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:59.698 01:07:21 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 725642' 00:06:59.698 Process app_repeat pid: 725642 00:06:59.698 01:07:21 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:59.698 01:07:21 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:59.698 spdk_app_start Round 0 00:06:59.698 01:07:21 event.app_repeat -- event/event.sh@25 -- # waitforlisten 725642 /var/tmp/spdk-nbd.sock 00:06:59.698 01:07:21 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 725642 ']' 00:06:59.698 01:07:21 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:59.698 01:07:21 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:59.698 01:07:21 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:59.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:59.698 01:07:21 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:59.698 01:07:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:59.698 [2024-07-25 01:07:21.965975] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:06:59.698 [2024-07-25 01:07:21.966029] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid725642 ] 00:06:59.698 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.698 [2024-07-25 01:07:22.023011] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:59.698 [2024-07-25 01:07:22.096152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:59.698 [2024-07-25 01:07:22.096156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.636 01:07:22 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:00.636 01:07:22 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:07:00.636 01:07:22 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:00.636 Malloc0 00:07:00.636 01:07:22 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:00.896 Malloc1 00:07:00.896 01:07:23 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:00.896 01:07:23 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.896 01:07:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:00.896 01:07:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 
00:07:00.896 01:07:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:00.896 01:07:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:00.896 01:07:23 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:00.896 01:07:23 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.896 01:07:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:00.896 01:07:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:00.896 01:07:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:00.896 01:07:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:00.896 01:07:23 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:00.896 01:07:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:00.896 01:07:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:00.896 01:07:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:00.896 /dev/nbd0 00:07:00.896 01:07:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:00.896 01:07:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:00.896 01:07:23 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:00.896 01:07:23 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:07:00.896 01:07:23 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:00.896 01:07:23 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:00.896 01:07:23 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:00.896 01:07:23 event.app_repeat -- common/autotest_common.sh@871 -- # 
break 00:07:00.896 01:07:23 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:00.896 01:07:23 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:00.896 01:07:23 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:00.896 1+0 records in 00:07:00.896 1+0 records out 00:07:00.896 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000197716 s, 20.7 MB/s 00:07:00.897 01:07:23 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:00.897 01:07:23 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:07:00.897 01:07:23 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:00.897 01:07:23 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:00.897 01:07:23 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:07:00.897 01:07:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:00.897 01:07:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:00.897 01:07:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:01.155 /dev/nbd1 00:07:01.155 01:07:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:01.155 01:07:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:01.155 01:07:23 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:01.155 01:07:23 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:07:01.155 01:07:23 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:01.155 01:07:23 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 
)) 00:07:01.155 01:07:23 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:01.155 01:07:23 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:07:01.155 01:07:23 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:01.155 01:07:23 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:01.155 01:07:23 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:01.155 1+0 records in 00:07:01.155 1+0 records out 00:07:01.155 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000183002 s, 22.4 MB/s 00:07:01.155 01:07:23 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:01.155 01:07:23 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:07:01.155 01:07:23 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:01.155 01:07:23 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:01.155 01:07:23 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:07:01.155 01:07:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:01.155 01:07:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:01.155 01:07:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:01.155 01:07:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.155 01:07:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:01.415 01:07:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:01.415 { 00:07:01.415 "nbd_device": "/dev/nbd0", 00:07:01.415 "bdev_name": 
"Malloc0" 00:07:01.415 }, 00:07:01.415 { 00:07:01.415 "nbd_device": "/dev/nbd1", 00:07:01.415 "bdev_name": "Malloc1" 00:07:01.415 } 00:07:01.415 ]' 00:07:01.415 01:07:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:01.415 { 00:07:01.415 "nbd_device": "/dev/nbd0", 00:07:01.415 "bdev_name": "Malloc0" 00:07:01.415 }, 00:07:01.415 { 00:07:01.415 "nbd_device": "/dev/nbd1", 00:07:01.415 "bdev_name": "Malloc1" 00:07:01.415 } 00:07:01.415 ]' 00:07:01.415 01:07:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:01.415 01:07:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:01.415 /dev/nbd1' 00:07:01.415 01:07:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:01.415 /dev/nbd1' 00:07:01.415 01:07:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:01.415 01:07:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:01.415 01:07:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:01.415 01:07:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:01.415 01:07:23 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:01.415 01:07:23 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:01.415 01:07:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:01.415 01:07:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:01.415 01:07:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:01.415 01:07:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:01.415 01:07:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:01.415 01:07:23 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 
count=256 00:07:01.415 256+0 records in 00:07:01.415 256+0 records out 00:07:01.415 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103566 s, 101 MB/s 00:07:01.415 01:07:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:01.415 01:07:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:01.415 256+0 records in 00:07:01.415 256+0 records out 00:07:01.415 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0141297 s, 74.2 MB/s 00:07:01.415 01:07:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:01.415 01:07:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:01.415 256+0 records in 00:07:01.415 256+0 records out 00:07:01.415 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0154892 s, 67.7 MB/s 00:07:01.415 01:07:23 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:01.415 01:07:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:01.415 01:07:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:01.415 01:07:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:01.415 01:07:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:01.415 01:07:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:01.415 01:07:23 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:01.415 01:07:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:01.415 01:07:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 
00:07:01.415 01:07:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:01.415 01:07:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:01.415 01:07:23 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:01.415 01:07:23 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:01.415 01:07:23 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.415 01:07:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:01.415 01:07:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:01.415 01:07:23 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:01.415 01:07:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:01.415 01:07:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:01.675 01:07:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:01.675 01:07:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:01.675 01:07:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:01.675 01:07:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:01.675 01:07:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:01.675 01:07:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:01.675 01:07:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:01.675 01:07:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:01.675 01:07:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:01.675 01:07:24 
event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:01.934 01:07:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:01.934 01:07:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:01.934 01:07:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:01.934 01:07:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:01.934 01:07:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:01.934 01:07:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:01.934 01:07:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:01.934 01:07:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:01.934 01:07:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:01.934 01:07:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.934 01:07:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:01.934 01:07:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:02.195 01:07:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:02.195 01:07:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:02.195 01:07:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:02.195 01:07:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:02.195 01:07:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:02.195 01:07:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:02.195 01:07:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:02.195 01:07:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:02.195 01:07:24 
event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:02.195 01:07:24 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:02.195 01:07:24 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:02.195 01:07:24 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:02.195 01:07:24 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:02.455 [2024-07-25 01:07:24.851199] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:02.455 [2024-07-25 01:07:24.918972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:02.455 [2024-07-25 01:07:24.918974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.714 [2024-07-25 01:07:24.959847] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:02.714 [2024-07-25 01:07:24.959887] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:05.251 01:07:27 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:05.251 01:07:27 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:05.251 spdk_app_start Round 1 00:07:05.251 01:07:27 event.app_repeat -- event/event.sh@25 -- # waitforlisten 725642 /var/tmp/spdk-nbd.sock 00:07:05.251 01:07:27 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 725642 ']' 00:07:05.251 01:07:27 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:05.251 01:07:27 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:05.251 01:07:27 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:05.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:05.251 01:07:27 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:05.251 01:07:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:05.511 01:07:27 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:05.511 01:07:27 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:07:05.511 01:07:27 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:05.771 Malloc0 00:07:05.771 01:07:28 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:05.771 Malloc1 00:07:05.771 01:07:28 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:05.771 01:07:28 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.771 01:07:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:05.771 01:07:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:05.771 01:07:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:05.771 01:07:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:05.771 01:07:28 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:05.771 01:07:28 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.771 01:07:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:05.771 01:07:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:05.771 01:07:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:05.771 01:07:28 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:07:05.771 01:07:28 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:05.771 01:07:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:05.771 01:07:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:05.772 01:07:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:06.031 /dev/nbd0 00:07:06.031 01:07:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:06.031 01:07:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:06.031 01:07:28 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:06.031 01:07:28 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:07:06.031 01:07:28 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:06.031 01:07:28 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:06.031 01:07:28 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:06.031 01:07:28 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:07:06.031 01:07:28 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:06.031 01:07:28 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:06.031 01:07:28 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:06.031 1+0 records in 00:07:06.031 1+0 records out 00:07:06.031 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000188984 s, 21.7 MB/s 00:07:06.031 01:07:28 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:06.031 01:07:28 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:07:06.031 01:07:28 event.app_repeat -- 
common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:06.031 01:07:28 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:06.031 01:07:28 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:07:06.031 01:07:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:06.032 01:07:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:06.032 01:07:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:06.292 /dev/nbd1 00:07:06.292 01:07:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:06.292 01:07:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:06.292 01:07:28 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:06.292 01:07:28 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:07:06.292 01:07:28 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:06.292 01:07:28 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:06.292 01:07:28 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:06.292 01:07:28 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:07:06.292 01:07:28 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:06.292 01:07:28 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:06.292 01:07:28 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:06.292 1+0 records in 00:07:06.292 1+0 records out 00:07:06.292 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000209141 s, 19.6 MB/s 00:07:06.292 01:07:28 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:06.292 01:07:28 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:07:06.292 01:07:28 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:06.292 01:07:28 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:06.292 01:07:28 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:07:06.292 01:07:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:06.292 01:07:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:06.292 01:07:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:06.292 01:07:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.292 01:07:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:06.552 01:07:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:06.553 { 00:07:06.553 "nbd_device": "/dev/nbd0", 00:07:06.553 "bdev_name": "Malloc0" 00:07:06.553 }, 00:07:06.553 { 00:07:06.553 "nbd_device": "/dev/nbd1", 00:07:06.553 "bdev_name": "Malloc1" 00:07:06.553 } 00:07:06.553 ]' 00:07:06.553 01:07:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:06.553 { 00:07:06.553 "nbd_device": "/dev/nbd0", 00:07:06.553 "bdev_name": "Malloc0" 00:07:06.553 }, 00:07:06.553 { 00:07:06.553 "nbd_device": "/dev/nbd1", 00:07:06.553 "bdev_name": "Malloc1" 00:07:06.553 } 00:07:06.553 ]' 00:07:06.553 01:07:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:06.553 01:07:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:06.553 /dev/nbd1' 00:07:06.553 01:07:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:06.553 /dev/nbd1' 00:07:06.553 
01:07:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:06.553 01:07:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:06.553 01:07:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:06.553 01:07:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:06.553 01:07:28 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:06.553 01:07:28 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:06.553 01:07:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:06.553 01:07:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:06.553 01:07:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:06.553 01:07:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:06.553 01:07:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:06.553 01:07:28 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:06.553 256+0 records in 00:07:06.553 256+0 records out 00:07:06.553 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103222 s, 102 MB/s 00:07:06.553 01:07:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:06.553 01:07:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:06.553 256+0 records in 00:07:06.553 256+0 records out 00:07:06.553 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0137564 s, 76.2 MB/s 00:07:06.553 01:07:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:06.553 01:07:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:06.553 256+0 records in 00:07:06.553 256+0 records out 00:07:06.553 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0151001 s, 69.4 MB/s 00:07:06.553 01:07:28 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:06.553 01:07:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:06.553 01:07:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:06.553 01:07:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:06.553 01:07:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:06.553 01:07:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:06.553 01:07:28 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:06.553 01:07:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:06.553 01:07:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:06.553 01:07:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:06.553 01:07:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:06.553 01:07:28 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:06.553 01:07:28 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:06.553 01:07:28 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.553 01:07:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:07:06.553 01:07:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:06.553 01:07:28 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:06.553 01:07:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.553 01:07:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:06.813 01:07:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:06.813 01:07:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:06.813 01:07:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:06.813 01:07:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.813 01:07:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.813 01:07:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:06.813 01:07:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:06.813 01:07:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.813 01:07:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.813 01:07:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:06.813 01:07:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:06.813 01:07:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:06.813 01:07:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:06.813 01:07:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.813 01:07:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.813 01:07:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:06.813 01:07:29 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:07:06.813 01:07:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.813 01:07:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:06.813 01:07:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.813 01:07:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:07.072 01:07:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:07.072 01:07:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:07.072 01:07:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:07.072 01:07:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:07.072 01:07:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:07.072 01:07:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:07.072 01:07:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:07.073 01:07:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:07.073 01:07:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:07.073 01:07:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:07.073 01:07:29 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:07.073 01:07:29 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:07.073 01:07:29 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:07.332 01:07:29 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:07.592 [2024-07-25 01:07:29.896280] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:07.592 [2024-07-25 01:07:29.964717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:07.592 [2024-07-25 01:07:29.964719] 
reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.592 [2024-07-25 01:07:30.006302] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:07.592 [2024-07-25 01:07:30.006339] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:10.935 01:07:32 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:10.935 01:07:32 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:10.935 spdk_app_start Round 2 00:07:10.935 01:07:32 event.app_repeat -- event/event.sh@25 -- # waitforlisten 725642 /var/tmp/spdk-nbd.sock 00:07:10.935 01:07:32 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 725642 ']' 00:07:10.935 01:07:32 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:10.935 01:07:32 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:10.935 01:07:32 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:10.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:10.935 01:07:32 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:10.935 01:07:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:10.935 01:07:32 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:10.935 01:07:32 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:07:10.935 01:07:32 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:10.935 Malloc0 00:07:10.935 01:07:33 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:10.935 Malloc1 00:07:10.935 01:07:33 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:10.935 01:07:33 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:10.935 01:07:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:10.935 01:07:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:10.935 01:07:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:10.935 01:07:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:10.935 01:07:33 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:10.935 01:07:33 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:10.935 01:07:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:10.935 01:07:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:10.935 01:07:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:10.935 01:07:33 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:07:10.935 01:07:33 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:10.935 01:07:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:10.935 01:07:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:10.935 01:07:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:10.935 /dev/nbd0 00:07:10.935 01:07:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:10.935 01:07:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:10.935 01:07:33 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:10.935 01:07:33 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:07:10.935 01:07:33 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:10.935 01:07:33 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:10.935 01:07:33 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:10.935 01:07:33 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:07:10.936 01:07:33 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:10.936 01:07:33 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:10.936 01:07:33 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:10.936 1+0 records in 00:07:10.936 1+0 records out 00:07:10.936 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000196187 s, 20.9 MB/s 00:07:10.936 01:07:33 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:11.195 01:07:33 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:07:11.195 01:07:33 event.app_repeat -- 
common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:11.195 01:07:33 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:11.195 01:07:33 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:07:11.195 01:07:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:11.195 01:07:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:11.195 01:07:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:11.195 /dev/nbd1 00:07:11.195 01:07:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:11.195 01:07:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:11.195 01:07:33 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:11.195 01:07:33 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:07:11.195 01:07:33 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:11.195 01:07:33 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:11.195 01:07:33 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:11.195 01:07:33 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:07:11.195 01:07:33 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:11.195 01:07:33 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:11.195 01:07:33 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:11.195 1+0 records in 00:07:11.195 1+0 records out 00:07:11.195 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000195019 s, 21.0 MB/s 00:07:11.195 01:07:33 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:11.195 01:07:33 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:07:11.195 01:07:33 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:11.195 01:07:33 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:11.195 01:07:33 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:07:11.195 01:07:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:11.195 01:07:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:11.195 01:07:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:11.195 01:07:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.195 01:07:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:11.455 01:07:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:11.455 { 00:07:11.455 "nbd_device": "/dev/nbd0", 00:07:11.455 "bdev_name": "Malloc0" 00:07:11.455 }, 00:07:11.455 { 00:07:11.455 "nbd_device": "/dev/nbd1", 00:07:11.455 "bdev_name": "Malloc1" 00:07:11.455 } 00:07:11.455 ]' 00:07:11.455 01:07:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:11.455 { 00:07:11.455 "nbd_device": "/dev/nbd0", 00:07:11.455 "bdev_name": "Malloc0" 00:07:11.455 }, 00:07:11.455 { 00:07:11.455 "nbd_device": "/dev/nbd1", 00:07:11.455 "bdev_name": "Malloc1" 00:07:11.455 } 00:07:11.455 ]' 00:07:11.455 01:07:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:11.455 01:07:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:11.455 /dev/nbd1' 00:07:11.455 01:07:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:11.455 /dev/nbd1' 00:07:11.455 
01:07:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:11.455 01:07:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:11.455 01:07:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:11.455 01:07:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:11.455 01:07:33 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:11.455 01:07:33 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:11.455 01:07:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:11.455 01:07:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:11.455 01:07:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:11.455 01:07:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:11.455 01:07:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:11.455 01:07:33 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:11.455 256+0 records in 00:07:11.455 256+0 records out 00:07:11.455 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010393 s, 101 MB/s 00:07:11.455 01:07:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:11.455 01:07:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:11.455 256+0 records in 00:07:11.455 256+0 records out 00:07:11.455 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0130771 s, 80.2 MB/s 00:07:11.455 01:07:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:11.455 01:07:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:11.455 256+0 records in 00:07:11.455 256+0 records out 00:07:11.455 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0139454 s, 75.2 MB/s 00:07:11.455 01:07:33 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:11.455 01:07:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:11.455 01:07:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:11.455 01:07:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:11.455 01:07:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:11.455 01:07:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:11.455 01:07:33 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:11.455 01:07:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:11.455 01:07:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:11.455 01:07:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:11.455 01:07:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:11.455 01:07:33 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:11.455 01:07:33 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:11.455 01:07:33 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.455 01:07:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:07:11.716 01:07:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:11.716 01:07:33 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:11.716 01:07:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:11.716 01:07:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:11.716 01:07:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:11.716 01:07:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:11.716 01:07:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:11.716 01:07:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:11.716 01:07:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:11.716 01:07:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:11.716 01:07:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:11.716 01:07:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:11.716 01:07:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:11.716 01:07:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:11.976 01:07:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:11.976 01:07:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:11.976 01:07:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:11.976 01:07:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:11.976 01:07:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:11.976 01:07:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:11.976 01:07:34 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:07:11.976 01:07:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:11.976 01:07:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:11.976 01:07:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.976 01:07:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:12.237 01:07:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:12.237 01:07:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:12.237 01:07:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:12.237 01:07:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:12.237 01:07:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:12.237 01:07:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:12.237 01:07:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:12.237 01:07:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:12.237 01:07:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:12.237 01:07:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:12.237 01:07:34 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:12.237 01:07:34 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:12.237 01:07:34 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:12.237 01:07:34 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:12.498 [2024-07-25 01:07:34.908696] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:12.498 [2024-07-25 01:07:34.976145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:12.498 [2024-07-25 01:07:34.976148] 
reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.815 [2024-07-25 01:07:35.016975] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:12.815 [2024-07-25 01:07:35.017017] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:15.393 01:07:37 event.app_repeat -- event/event.sh@38 -- # waitforlisten 725642 /var/tmp/spdk-nbd.sock 00:07:15.393 01:07:37 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 725642 ']' 00:07:15.393 01:07:37 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:15.393 01:07:37 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:15.393 01:07:37 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:15.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:15.393 01:07:37 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:15.393 01:07:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:15.653 01:07:37 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:15.654 01:07:37 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:07:15.654 01:07:37 event.app_repeat -- event/event.sh@39 -- # killprocess 725642 00:07:15.654 01:07:37 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 725642 ']' 00:07:15.654 01:07:37 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 725642 00:07:15.654 01:07:37 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:07:15.654 01:07:37 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:15.654 01:07:37 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 725642 00:07:15.654 01:07:37 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:15.654 01:07:37 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:15.654 01:07:37 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 725642' 00:07:15.654 killing process with pid 725642 00:07:15.654 01:07:37 event.app_repeat -- common/autotest_common.sh@967 -- # kill 725642 00:07:15.654 01:07:37 event.app_repeat -- common/autotest_common.sh@972 -- # wait 725642 00:07:15.654 spdk_app_start is called in Round 0. 00:07:15.654 Shutdown signal received, stop current app iteration 00:07:15.654 Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 reinitialization... 00:07:15.654 spdk_app_start is called in Round 1. 00:07:15.654 Shutdown signal received, stop current app iteration 00:07:15.654 Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 reinitialization... 00:07:15.654 spdk_app_start is called in Round 2. 
00:07:15.654 Shutdown signal received, stop current app iteration 00:07:15.654 Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 reinitialization... 00:07:15.654 spdk_app_start is called in Round 3. 00:07:15.654 Shutdown signal received, stop current app iteration 00:07:15.654 01:07:38 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:15.654 01:07:38 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:15.654 00:07:15.654 real 0m16.189s 00:07:15.654 user 0m35.188s 00:07:15.654 sys 0m2.344s 00:07:15.654 01:07:38 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:15.654 01:07:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:15.654 ************************************ 00:07:15.654 END TEST app_repeat 00:07:15.654 ************************************ 00:07:15.915 01:07:38 event -- common/autotest_common.sh@1142 -- # return 0 00:07:15.915 01:07:38 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:15.915 01:07:38 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:15.915 01:07:38 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:15.915 01:07:38 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.915 01:07:38 event -- common/autotest_common.sh@10 -- # set +x 00:07:15.915 ************************************ 00:07:15.915 START TEST cpu_locks 00:07:15.915 ************************************ 00:07:15.915 01:07:38 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:15.915 * Looking for test storage... 
00:07:15.915 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:15.915 01:07:38 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:15.915 01:07:38 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:15.915 01:07:38 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:15.915 01:07:38 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:15.915 01:07:38 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:15.915 01:07:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.915 01:07:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:15.915 ************************************ 00:07:15.915 START TEST default_locks 00:07:15.915 ************************************ 00:07:15.915 01:07:38 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:07:15.915 01:07:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=728624 00:07:15.915 01:07:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 728624 00:07:15.915 01:07:38 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 728624 ']' 00:07:15.915 01:07:38 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.915 01:07:38 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:15.915 01:07:38 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:15.915 01:07:38 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:15.915 01:07:38 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:15.915 01:07:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:15.915 [2024-07-25 01:07:38.348559] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:07:15.915 [2024-07-25 01:07:38.348603] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid728624 ] 00:07:15.915 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.915 [2024-07-25 01:07:38.403703] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.176 [2024-07-25 01:07:38.476630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.746 01:07:39 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:16.746 01:07:39 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:07:16.746 01:07:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 728624 00:07:16.746 01:07:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 728624 00:07:16.746 01:07:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:17.316 lslocks: write error 00:07:17.316 01:07:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 728624 00:07:17.316 01:07:39 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 728624 ']' 00:07:17.316 01:07:39 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 728624 00:07:17.316 01:07:39 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 
00:07:17.316 01:07:39 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:07:17.316 01:07:39 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 728624
00:07:17.316 01:07:39 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:07:17.316 01:07:39 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:07:17.316 01:07:39 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 728624'
00:07:17.316 killing process with pid 728624
00:07:17.316 01:07:39 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 728624
00:07:17.316 01:07:39 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 728624
00:07:17.577 01:07:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 728624
00:07:17.577 01:07:39 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0
00:07:17.577 01:07:39 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 728624
00:07:17.577 01:07:39 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten
00:07:17.577 01:07:39 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:07:17.577 01:07:39 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten
00:07:17.577 01:07:39 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:07:17.577 01:07:39 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 728624
00:07:17.577 01:07:39 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 728624 ']'
00:07:17.577 01:07:39 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:17.577 01:07:39 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100
00:07:17.577 01:07:39 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:17.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:17.577 01:07:39 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable
00:07:17.577 01:07:39 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:07:17.577 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (728624) - No such process
00:07:17.577 ERROR: process (pid: 728624) is no longer running
00:07:17.577 01:07:39 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:17.577 01:07:39 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1
00:07:17.577 01:07:39 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1
00:07:17.577 01:07:39 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:07:17.577 01:07:39 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:07:17.577 01:07:39 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:07:17.577 01:07:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:07:17.577 01:07:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:07:17.577 01:07:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:07:17.577 01:07:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:07:17.577
00:07:17.577 real 0m1.675s
00:07:17.577 user 0m1.769s
00:07:17.577 sys 0m0.539s
00:07:17.577 01:07:39 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:17.577 01:07:39 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:07:17.577 ************************************
00:07:17.577 END TEST default_locks
00:07:17.577 ************************************
00:07:17.577 01:07:40 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0
00:07:17.577 01:07:40 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:07:17.577 01:07:40 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:07:17.577 01:07:40 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:17.577 01:07:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:17.577 ************************************
00:07:17.577 START TEST default_locks_via_rpc
00:07:17.577 ************************************
00:07:17.577 01:07:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc
00:07:17.577 01:07:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=728897
00:07:17.577 01:07:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 728897
00:07:17.577 01:07:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:07:17.577 01:07:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 728897 ']'
00:07:17.577 01:07:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:17.577 01:07:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100
00:07:17.577 01:07:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:17.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:17.577 01:07:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable
00:07:17.577 01:07:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:17.837 [2024-07-25 01:07:40.096304] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization...
00:07:17.837 [2024-07-25 01:07:40.096348] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid728897 ]
00:07:17.837 EAL: No free 2048 kB hugepages reported on node 1
00:07:17.837 [2024-07-25 01:07:40.148503] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:17.837 [2024-07-25 01:07:40.227091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:18.406 01:07:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:18.665 01:07:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0
00:07:18.665 01:07:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:07:18.665 01:07:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:18.665 01:07:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:18.665 01:07:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:18.665 01:07:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:07:18.665 01:07:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:07:18.665 01:07:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:07:18.665 01:07:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:07:18.665 01:07:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:07:18.665 01:07:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:18.665 01:07:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:18.665 01:07:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:18.665 01:07:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 728897
00:07:18.665 01:07:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 728897
00:07:18.665 01:07:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:07:18.925 01:07:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 728897
00:07:18.925 01:07:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 728897 ']'
00:07:18.925 01:07:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 728897
00:07:18.925 01:07:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname
00:07:18.925 01:07:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:07:18.925 01:07:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 728897
00:07:18.925 01:07:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:07:18.925 01:07:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:07:18.925 01:07:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 728897'
00:07:18.925 killing process with pid 728897
00:07:18.925 01:07:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 728897
00:07:18.925 01:07:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 728897
00:07:19.185
00:07:19.185 real 0m1.628s
00:07:19.185 user 0m1.739s
00:07:19.185 sys 0m0.522s
00:07:19.185 01:07:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:19.185 01:07:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:19.185 ************************************
00:07:19.185 END TEST default_locks_via_rpc
00:07:19.185 ************************************
00:07:19.445 01:07:41 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0
00:07:19.445 01:07:41 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:07:19.445 01:07:41 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:07:19.445 01:07:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:19.445 01:07:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:19.445 ************************************
00:07:19.445 START TEST non_locking_app_on_locked_coremask
00:07:19.445 ************************************
00:07:19.445 01:07:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask
00:07:19.445 01:07:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:07:19.445 01:07:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=729160
00:07:19.445 01:07:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 729160 /var/tmp/spdk.sock
00:07:19.445 01:07:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 729160 ']'
00:07:19.445 01:07:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:19.445 01:07:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:07:19.445 01:07:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:19.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:19.445 01:07:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:07:19.445 01:07:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:19.445 [2024-07-25 01:07:41.776245] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization...
00:07:19.445 [2024-07-25 01:07:41.776283] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid729160 ]
00:07:19.445 EAL: No free 2048 kB hugepages reported on node 1
00:07:19.445 [2024-07-25 01:07:41.824723] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:19.445 [2024-07-25 01:07:41.903362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:20.384 01:07:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:20.384 01:07:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0
00:07:20.384 01:07:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:07:20.384 01:07:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=729390
00:07:20.384 01:07:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 729390 /var/tmp/spdk2.sock
00:07:20.384 01:07:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 729390 ']'
00:07:20.384 01:07:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:20.384 01:07:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:07:20.384 01:07:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:07:20.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:07:20.384 01:07:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:07:20.384 01:07:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:20.384 [2024-07-25 01:07:42.621834] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization...
00:07:20.384 [2024-07-25 01:07:42.621880] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid729390 ]
00:07:20.384 EAL: No free 2048 kB hugepages reported on node 1
00:07:20.384 [2024-07-25 01:07:42.691298] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:07:20.384 [2024-07-25 01:07:42.691320] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:20.384 [2024-07-25 01:07:42.835981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:20.954 01:07:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:20.954 01:07:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0
00:07:20.954 01:07:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 729160
00:07:20.954 01:07:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 729160
00:07:20.954 01:07:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:07:21.522 lslocks: write error
00:07:21.522 01:07:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 729160
00:07:21.522 01:07:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 729160 ']'
00:07:21.522 01:07:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 729160
00:07:21.522 01:07:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname
00:07:21.522 01:07:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:07:21.522 01:07:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 729160
00:07:21.781 01:07:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:07:21.781 01:07:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:07:21.781 01:07:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 729160'
00:07:21.781 killing process with pid 729160
00:07:21.782 01:07:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 729160
00:07:21.782 01:07:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 729160
00:07:22.350 01:07:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 729390
00:07:22.350 01:07:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 729390 ']'
00:07:22.350 01:07:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 729390
00:07:22.350 01:07:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname
00:07:22.350 01:07:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:07:22.350 01:07:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 729390
00:07:22.350 01:07:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:07:22.350 01:07:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:07:22.350 01:07:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 729390'
00:07:22.350 killing process with pid 729390
00:07:22.350 01:07:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 729390
00:07:22.350 01:07:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 729390
00:07:22.610
00:07:22.610 real 0m3.246s
00:07:22.610 user 0m3.500s
00:07:22.610 sys 0m0.902s
00:07:22.610 01:07:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:22.610 01:07:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:22.610 ************************************
00:07:22.610 END TEST non_locking_app_on_locked_coremask
00:07:22.610 ************************************
00:07:22.610 01:07:45 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0
00:07:22.610 01:07:45 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:07:22.610 01:07:45 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:07:22.610 01:07:45 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:22.610 01:07:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:22.610 ************************************
00:07:22.610 START TEST locking_app_on_unlocked_coremask
00:07:22.610 ************************************
00:07:22.610 01:07:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask
00:07:22.610 01:07:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=729878
00:07:22.610 01:07:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 729878 /var/tmp/spdk.sock
00:07:22.610 01:07:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:07:22.610 01:07:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 729878 ']'
00:07:22.610 01:07:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:22.610 01:07:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:07:22.610 01:07:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:22.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:22.610 01:07:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:07:22.610 01:07:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:22.610 [2024-07-25 01:07:45.097132] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization...
00:07:22.610 [2024-07-25 01:07:45.097171] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid729878 ]
00:07:22.869 EAL: No free 2048 kB hugepages reported on node 1
00:07:22.870 [2024-07-25 01:07:45.149502] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:07:22.870 [2024-07-25 01:07:45.149525] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:22.870 [2024-07-25 01:07:45.228752] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:23.440 01:07:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:23.440 01:07:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0
00:07:23.440 01:07:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=729894
00:07:23.440 01:07:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 729894 /var/tmp/spdk2.sock
00:07:23.440 01:07:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:07:23.440 01:07:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 729894 ']'
00:07:23.440 01:07:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:23.440 01:07:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:07:23.440 01:07:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:07:23.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:07:23.440 01:07:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:07:23.440 01:07:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:23.700 [2024-07-25 01:07:45.956377] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization...
00:07:23.701 [2024-07-25 01:07:45.956426] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid729894 ]
00:07:23.701 EAL: No free 2048 kB hugepages reported on node 1
00:07:23.701 [2024-07-25 01:07:46.033443] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:23.701 [2024-07-25 01:07:46.177871] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:24.641 01:07:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:24.641 01:07:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0
00:07:24.641 01:07:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 729894
00:07:24.641 01:07:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 729894
00:07:24.641 01:07:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:07:24.901 lslocks: write error
00:07:24.901 01:07:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 729878
00:07:24.901 01:07:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 729878 ']'
00:07:24.901 01:07:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 729878
00:07:24.901 01:07:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname
00:07:24.901 01:07:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:07:24.901 01:07:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 729878
00:07:24.901 01:07:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:07:24.901 01:07:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:07:24.901 01:07:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 729878'
00:07:24.901 killing process with pid 729878
00:07:24.901 01:07:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 729878
00:07:24.901 01:07:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 729878
00:07:25.471 01:07:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 729894
00:07:25.471 01:07:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 729894 ']'
00:07:25.471 01:07:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 729894
00:07:25.472 01:07:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname
00:07:25.472 01:07:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:07:25.472 01:07:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 729894
00:07:25.472 01:07:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:07:25.472 01:07:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:07:25.472 01:07:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 729894'
00:07:25.472 killing process with pid 729894
00:07:25.472 01:07:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 729894
00:07:25.472 01:07:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 729894
00:07:25.732
00:07:25.732 real 0m3.099s
00:07:25.732 user 0m3.354s
00:07:25.732 sys 0m0.865s
00:07:25.732 01:07:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:25.732 01:07:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:25.732 ************************************
00:07:25.732 END TEST locking_app_on_unlocked_coremask
00:07:25.732 ************************************
00:07:25.732 01:07:48 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0
00:07:25.732 01:07:48 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:07:25.732 01:07:48 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:07:25.732 01:07:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:25.732 01:07:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:25.732 ************************************
00:07:25.732 START TEST locking_app_on_locked_coremask
00:07:25.732 ************************************
00:07:25.732 01:07:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask
00:07:25.732 01:07:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=730380
00:07:25.732 01:07:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 730380 /var/tmp/spdk.sock
00:07:25.732 01:07:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:07:25.732 01:07:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 730380 ']'
00:07:25.732 01:07:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:25.732 01:07:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:07:25.732 01:07:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:25.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:25.732 01:07:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:07:25.732 01:07:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:25.992 [2024-07-25 01:07:48.256816] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization...
00:07:25.992 [2024-07-25 01:07:48.256857] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid730380 ]
00:07:25.992 EAL: No free 2048 kB hugepages reported on node 1
00:07:25.992 [2024-07-25 01:07:48.308633] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:25.992 [2024-07-25 01:07:48.388209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:26.562 01:07:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:26.562 01:07:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0
00:07:26.822 01:07:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=730592
00:07:26.822 01:07:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 730592 /var/tmp/spdk2.sock
00:07:26.822 01:07:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:07:26.822 01:07:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0
00:07:26.822 01:07:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 730592 /var/tmp/spdk2.sock
00:07:26.822 01:07:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten
00:07:26.822 01:07:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:07:26.822 01:07:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten
00:07:26.822 01:07:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:07:26.822 01:07:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 730592 /var/tmp/spdk2.sock
00:07:26.822 01:07:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 730592 ']'
00:07:26.822 01:07:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:26.822 01:07:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:07:26.822 01:07:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:07:26.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:07:26.822 01:07:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:07:26.822 01:07:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:26.822 [2024-07-25 01:07:49.105726] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization...
00:07:26.822 [2024-07-25 01:07:49.105776] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid730592 ]
00:07:26.822 EAL: No free 2048 kB hugepages reported on node 1
00:07:26.822 [2024-07-25 01:07:49.182325] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 730380 has claimed it.
00:07:26.822 [2024-07-25 01:07:49.182357] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:07:27.392 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (730592) - No such process
00:07:27.392 ERROR: process (pid: 730592) is no longer running
00:07:27.392 01:07:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:27.392 01:07:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1
00:07:27.392 01:07:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1
00:07:27.392 01:07:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:07:27.392 01:07:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:07:27.392 01:07:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:07:27.392 01:07:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 730380
00:07:27.392 01:07:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 730380
00:07:27.392 01:07:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:07:27.652 lslocks: write error
00:07:27.652 01:07:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 730380
00:07:27.653 01:07:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 730380 ']'
00:07:27.653 01:07:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 730380
00:07:27.653 01:07:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname
00:07:27.653 01:07:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:07:27.653 01:07:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 730380
00:07:27.653 01:07:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:07:27.653 01:07:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:07:27.653 01:07:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 730380'
00:07:27.653 killing process with pid 730380
00:07:27.653 01:07:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 730380
00:07:27.653 01:07:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 730380
00:07:27.913
00:07:27.913 real 0m2.148s
00:07:27.913 user 0m2.393s
00:07:27.913 sys 0m0.542s
00:07:27.913 01:07:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:27.913 01:07:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:27.913 ************************************
00:07:27.913 END TEST locking_app_on_locked_coremask
00:07:27.913 ************************************
00:07:27.913 01:07:50 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0
00:07:27.913 01:07:50 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:07:27.913 01:07:50 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:07:27.913 01:07:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:27.913 01:07:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:28.173 ************************************
00:07:28.174 START TEST locking_overlapped_coremask
00:07:28.174 ************************************
00:07:28.174 01:07:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask
00:07:28.174 01:07:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=730848
00:07:28.174 01:07:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 730848 /var/tmp/spdk.sock
00:07:28.174 01:07:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7
00:07:28.174 01:07:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 730848 ']'
00:07:28.174 01:07:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:28.174 01:07:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:07:28.174 01:07:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain
socket /var/tmp/spdk.sock...' 00:07:28.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.174 01:07:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:28.174 01:07:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:28.174 [2024-07-25 01:07:50.473445] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:07:28.174 [2024-07-25 01:07:50.473487] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid730848 ] 00:07:28.174 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.174 [2024-07-25 01:07:50.527024] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:28.174 [2024-07-25 01:07:50.599306] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:28.174 [2024-07-25 01:07:50.599405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:28.174 [2024-07-25 01:07:50.599407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.805 01:07:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:28.805 01:07:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:28.805 01:07:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:28.805 01:07:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=730889 00:07:28.805 01:07:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 730889 /var/tmp/spdk2.sock 00:07:28.805 01:07:51 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:28.805 01:07:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 730889 /var/tmp/spdk2.sock 00:07:28.805 01:07:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:29.066 01:07:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:29.066 01:07:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:29.066 01:07:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:29.066 01:07:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 730889 /var/tmp/spdk2.sock 00:07:29.066 01:07:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 730889 ']' 00:07:29.066 01:07:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:29.066 01:07:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:29.066 01:07:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:29.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:29.066 01:07:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:29.066 01:07:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:29.066 [2024-07-25 01:07:51.306374] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:07:29.066 [2024-07-25 01:07:51.306423] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid730889 ] 00:07:29.066 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.066 [2024-07-25 01:07:51.382949] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 730848 has claimed it. 00:07:29.066 [2024-07-25 01:07:51.382988] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:29.637 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (730889) - No such process 00:07:29.637 ERROR: process (pid: 730889) is no longer running 00:07:29.637 01:07:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:29.637 01:07:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:07:29.637 01:07:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:29.637 01:07:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:29.637 01:07:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:29.637 01:07:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:29.637 01:07:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:29.637 01:07:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:29.637 01:07:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:29.637 01:07:51 event.cpu_locks.locking_overlapped_coremask 
-- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:29.637 01:07:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 730848 00:07:29.637 01:07:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 730848 ']' 00:07:29.637 01:07:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 730848 00:07:29.637 01:07:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:07:29.637 01:07:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:29.637 01:07:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 730848 00:07:29.637 01:07:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:29.637 01:07:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:29.637 01:07:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 730848' 00:07:29.637 killing process with pid 730848 00:07:29.637 01:07:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 730848 00:07:29.637 01:07:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 730848 00:07:29.897 00:07:29.897 real 0m1.876s 00:07:29.897 user 0m5.281s 00:07:29.897 sys 0m0.418s 00:07:29.897 01:07:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:29.897 01:07:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- 
# set +x 00:07:29.897 ************************************ 00:07:29.897 END TEST locking_overlapped_coremask 00:07:29.897 ************************************ 00:07:29.897 01:07:52 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:29.897 01:07:52 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:29.897 01:07:52 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:29.897 01:07:52 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.897 01:07:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:29.897 ************************************ 00:07:29.897 START TEST locking_overlapped_coremask_via_rpc 00:07:29.897 ************************************ 00:07:29.897 01:07:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:07:29.897 01:07:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=731148 00:07:29.897 01:07:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 731148 /var/tmp/spdk.sock 00:07:29.897 01:07:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:29.897 01:07:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 731148 ']' 00:07:29.897 01:07:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.897 01:07:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:29.898 01:07:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:29.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:29.898 01:07:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:29.898 01:07:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:30.158 [2024-07-25 01:07:52.412487] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:07:30.158 [2024-07-25 01:07:52.412526] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid731148 ] 00:07:30.158 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.158 [2024-07-25 01:07:52.464108] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:30.158 [2024-07-25 01:07:52.464130] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:30.158 [2024-07-25 01:07:52.545392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:30.158 [2024-07-25 01:07:52.545482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.158 [2024-07-25 01:07:52.545480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:31.098 01:07:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:31.098 01:07:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:31.098 01:07:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=731376 00:07:31.098 01:07:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 731376 /var/tmp/spdk2.sock 00:07:31.098 01:07:53 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:31.098 01:07:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 731376 ']' 00:07:31.098 01:07:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:31.098 01:07:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:31.098 01:07:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:31.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:31.098 01:07:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:31.098 01:07:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.098 [2024-07-25 01:07:53.276069] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:07:31.098 [2024-07-25 01:07:53.276121] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid731376 ] 00:07:31.098 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.098 [2024-07-25 01:07:53.352880] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:31.098 [2024-07-25 01:07:53.352904] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:31.098 [2024-07-25 01:07:53.498763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:31.098 [2024-07-25 01:07:53.502092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:31.098 [2024-07-25 01:07:53.502093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:31.668 01:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:31.668 01:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:31.668 01:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:31.668 01:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.668 01:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.668 01:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.668 01:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:31.668 01:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:31.668 01:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:31.668 01:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:07:31.668 01:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:31.668 01:07:54 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:07:31.668 01:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:31.668 01:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:31.668 01:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.668 01:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.668 [2024-07-25 01:07:54.103114] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 731148 has claimed it. 00:07:31.668 request: 00:07:31.668 { 00:07:31.668 "method": "framework_enable_cpumask_locks", 00:07:31.668 "req_id": 1 00:07:31.668 } 00:07:31.668 Got JSON-RPC error response 00:07:31.668 response: 00:07:31.668 { 00:07:31.668 "code": -32603, 00:07:31.668 "message": "Failed to claim CPU core: 2" 00:07:31.668 } 00:07:31.668 01:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:07:31.668 01:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:31.668 01:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:31.668 01:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:31.668 01:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:31.668 01:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 731148 /var/tmp/spdk.sock 00:07:31.668 01:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- 
# '[' -z 731148 ']' 00:07:31.668 01:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.668 01:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:31.668 01:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:31.668 01:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:31.668 01:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.929 01:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:31.929 01:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:31.929 01:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 731376 /var/tmp/spdk2.sock 00:07:31.929 01:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 731376 ']' 00:07:31.929 01:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:31.929 01:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:31.929 01:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:31.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:31.929 01:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:31.929 01:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.189 01:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:32.189 01:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:32.189 01:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:32.189 01:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:32.189 01:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:32.189 01:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:32.189 00:07:32.189 real 0m2.134s 00:07:32.189 user 0m0.881s 00:07:32.189 sys 0m0.177s 00:07:32.189 01:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:32.190 01:07:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.190 ************************************ 00:07:32.190 END TEST locking_overlapped_coremask_via_rpc 00:07:32.190 ************************************ 00:07:32.190 01:07:54 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:32.190 01:07:54 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:32.190 01:07:54 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 
731148 ]] 00:07:32.190 01:07:54 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 731148 00:07:32.190 01:07:54 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 731148 ']' 00:07:32.190 01:07:54 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 731148 00:07:32.190 01:07:54 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:07:32.190 01:07:54 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:32.190 01:07:54 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 731148 00:07:32.190 01:07:54 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:32.190 01:07:54 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:32.190 01:07:54 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 731148' 00:07:32.190 killing process with pid 731148 00:07:32.190 01:07:54 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 731148 00:07:32.190 01:07:54 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 731148 00:07:32.449 01:07:54 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 731376 ]] 00:07:32.449 01:07:54 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 731376 00:07:32.449 01:07:54 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 731376 ']' 00:07:32.449 01:07:54 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 731376 00:07:32.449 01:07:54 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:07:32.449 01:07:54 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:32.449 01:07:54 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 731376 00:07:32.449 01:07:54 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:07:32.449 01:07:54 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:07:32.449 01:07:54 event.cpu_locks -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 731376' 00:07:32.449 killing process with pid 731376 00:07:32.449 01:07:54 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 731376 00:07:32.449 01:07:54 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 731376 00:07:33.020 01:07:55 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:33.020 01:07:55 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:33.020 01:07:55 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 731148 ]] 00:07:33.020 01:07:55 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 731148 00:07:33.020 01:07:55 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 731148 ']' 00:07:33.020 01:07:55 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 731148 00:07:33.020 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (731148) - No such process 00:07:33.020 01:07:55 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 731148 is not found' 00:07:33.020 Process with pid 731148 is not found 00:07:33.020 01:07:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 731376 ]] 00:07:33.020 01:07:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 731376 00:07:33.020 01:07:55 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 731376 ']' 00:07:33.020 01:07:55 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 731376 00:07:33.020 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (731376) - No such process 00:07:33.020 01:07:55 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 731376 is not found' 00:07:33.020 Process with pid 731376 is not found 00:07:33.020 01:07:55 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:33.020 00:07:33.020 real 0m17.077s 00:07:33.020 user 0m29.583s 00:07:33.020 sys 0m4.841s 00:07:33.020 01:07:55 event.cpu_locks -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:07:33.020 01:07:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:33.020 ************************************ 00:07:33.020 END TEST cpu_locks 00:07:33.020 ************************************ 00:07:33.020 01:07:55 event -- common/autotest_common.sh@1142 -- # return 0 00:07:33.020 00:07:33.020 real 0m42.169s 00:07:33.020 user 1m20.682s 00:07:33.020 sys 0m8.092s 00:07:33.020 01:07:55 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:33.020 01:07:55 event -- common/autotest_common.sh@10 -- # set +x 00:07:33.020 ************************************ 00:07:33.020 END TEST event 00:07:33.020 ************************************ 00:07:33.020 01:07:55 -- common/autotest_common.sh@1142 -- # return 0 00:07:33.020 01:07:55 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:33.020 01:07:55 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:33.020 01:07:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.020 01:07:55 -- common/autotest_common.sh@10 -- # set +x 00:07:33.020 ************************************ 00:07:33.020 START TEST thread 00:07:33.020 ************************************ 00:07:33.020 01:07:55 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:33.020 * Looking for test storage... 
00:07:33.020 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:33.020 01:07:55 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:33.020 01:07:55 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:33.020 01:07:55 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.020 01:07:55 thread -- common/autotest_common.sh@10 -- # set +x 00:07:33.020 ************************************ 00:07:33.020 START TEST thread_poller_perf 00:07:33.020 ************************************ 00:07:33.020 01:07:55 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:33.020 [2024-07-25 01:07:55.490963] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:07:33.020 [2024-07-25 01:07:55.491032] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid731716 ] 00:07:33.280 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.280 [2024-07-25 01:07:55.549298] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.280 [2024-07-25 01:07:55.623703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.280 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:34.218 ====================================== 00:07:34.218 busy:2308241286 (cyc) 00:07:34.218 total_run_count: 415000 00:07:34.218 tsc_hz: 2300000000 (cyc) 00:07:34.218 ====================================== 00:07:34.218 poller_cost: 5562 (cyc), 2418 (nsec) 00:07:34.218 00:07:34.218 real 0m1.227s 00:07:34.218 user 0m1.149s 00:07:34.218 sys 0m0.074s 00:07:34.218 01:07:56 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:34.218 01:07:56 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:34.218 ************************************ 00:07:34.218 END TEST thread_poller_perf 00:07:34.218 ************************************ 00:07:34.478 01:07:56 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:34.478 01:07:56 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:34.478 01:07:56 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:34.478 01:07:56 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.478 01:07:56 thread -- common/autotest_common.sh@10 -- # set +x 00:07:34.478 ************************************ 00:07:34.478 START TEST thread_poller_perf 00:07:34.478 ************************************ 00:07:34.478 01:07:56 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:34.478 [2024-07-25 01:07:56.778333] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:07:34.478 [2024-07-25 01:07:56.778399] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid731963 ] 00:07:34.478 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.478 [2024-07-25 01:07:56.834496] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.478 [2024-07-25 01:07:56.905698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.478 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:35.868 ====================================== 00:07:35.868 busy:2301335272 (cyc) 00:07:35.868 total_run_count: 5387000 00:07:35.868 tsc_hz: 2300000000 (cyc) 00:07:35.868 ====================================== 00:07:35.868 poller_cost: 427 (cyc), 185 (nsec) 00:07:35.868 00:07:35.868 real 0m1.219s 00:07:35.868 user 0m1.146s 00:07:35.868 sys 0m0.069s 00:07:35.868 01:07:57 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:35.868 01:07:57 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:35.868 ************************************ 00:07:35.868 END TEST thread_poller_perf 00:07:35.868 ************************************ 00:07:35.868 01:07:58 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:35.868 01:07:58 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:35.868 00:07:35.868 real 0m2.646s 00:07:35.868 user 0m2.378s 00:07:35.868 sys 0m0.272s 00:07:35.868 01:07:58 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:35.868 01:07:58 thread -- common/autotest_common.sh@10 -- # set +x 00:07:35.868 ************************************ 00:07:35.868 END TEST thread 00:07:35.868 ************************************ 00:07:35.868 01:07:58 -- common/autotest_common.sh@1142 -- # return 0 00:07:35.868 01:07:58 -- spdk/autotest.sh@183 -- # run_test 
accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:35.868 01:07:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:35.868 01:07:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.868 01:07:58 -- common/autotest_common.sh@10 -- # set +x 00:07:35.868 ************************************ 00:07:35.868 START TEST accel 00:07:35.868 ************************************ 00:07:35.868 01:07:58 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:35.868 * Looking for test storage... 00:07:35.868 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:35.868 01:07:58 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:35.868 01:07:58 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:07:35.868 01:07:58 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:35.868 01:07:58 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=732256 00:07:35.868 01:07:58 accel -- accel/accel.sh@63 -- # waitforlisten 732256 00:07:35.868 01:07:58 accel -- common/autotest_common.sh@829 -- # '[' -z 732256 ']' 00:07:35.868 01:07:58 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.868 01:07:58 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:35.868 01:07:58 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:35.868 01:07:58 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:35.868 01:07:58 accel -- common/autotest_common.sh@10 -- # set +x 00:07:35.868 01:07:58 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:35.868 01:07:58 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:35.868 01:07:58 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:35.868 01:07:58 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:35.868 01:07:58 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.868 01:07:58 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.868 01:07:58 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:35.868 01:07:58 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:35.868 01:07:58 accel -- accel/accel.sh@41 -- # jq -r . 00:07:35.868 [2024-07-25 01:07:58.203050] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:07:35.868 [2024-07-25 01:07:58.203095] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid732256 ] 00:07:35.868 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.868 [2024-07-25 01:07:58.258434] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.868 [2024-07-25 01:07:58.340971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.814 01:07:59 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:36.814 01:07:59 accel -- common/autotest_common.sh@862 -- # return 0 00:07:36.814 01:07:59 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:36.814 01:07:59 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:36.814 01:07:59 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:36.814 01:07:59 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:36.814 01:07:59 accel -- accel/accel.sh@70 -- 
# exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:36.814 01:07:59 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:36.814 01:07:59 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.814 01:07:59 accel -- common/autotest_common.sh@10 -- # set +x 00:07:36.814 01:07:59 accel -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:07:36.814 01:07:59 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.814 01:07:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:36.815 01:07:59 accel -- accel/accel.sh@72 -- # IFS== 00:07:36.815 01:07:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:36.815 01:07:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:36.815 01:07:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:36.815 01:07:59 accel -- accel/accel.sh@72 -- # IFS== 00:07:36.815 01:07:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:36.815 01:07:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:36.815 01:07:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:36.815 01:07:59 accel -- accel/accel.sh@72 -- # IFS== 00:07:36.815 01:07:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:36.815 01:07:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:36.815 01:07:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:36.815 01:07:59 accel -- accel/accel.sh@72 -- # IFS== 00:07:36.815 01:07:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:36.815 01:07:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:36.815 01:07:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:36.815 01:07:59 accel -- accel/accel.sh@72 -- # IFS== 00:07:36.815 01:07:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:36.815 01:07:59 
accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:36.815 01:07:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:36.815 01:07:59 accel -- accel/accel.sh@72 -- # IFS== 00:07:36.815 01:07:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:36.815 01:07:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:36.815 01:07:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:36.815 01:07:59 accel -- accel/accel.sh@72 -- # IFS== 00:07:36.815 01:07:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:36.815 01:07:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:36.815 01:07:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:36.815 01:07:59 accel -- accel/accel.sh@72 -- # IFS== 00:07:36.815 01:07:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:36.815 01:07:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:36.815 01:07:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:36.815 01:07:59 accel -- accel/accel.sh@72 -- # IFS== 00:07:36.815 01:07:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:36.815 01:07:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:36.815 01:07:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:36.815 01:07:59 accel -- accel/accel.sh@72 -- # IFS== 00:07:36.815 01:07:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:36.815 01:07:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:36.816 01:07:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:36.816 01:07:59 accel -- accel/accel.sh@72 -- # IFS== 00:07:36.816 01:07:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:36.816 01:07:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:36.816 01:07:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:36.816 01:07:59 
accel -- accel/accel.sh@72 -- # IFS== 00:07:36.816 01:07:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:36.816 01:07:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:36.816 01:07:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:36.816 01:07:59 accel -- accel/accel.sh@72 -- # IFS== 00:07:36.816 01:07:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:36.816 01:07:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:36.816 01:07:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:36.816 01:07:59 accel -- accel/accel.sh@72 -- # IFS== 00:07:36.816 01:07:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:36.816 01:07:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:36.816 01:07:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:36.816 01:07:59 accel -- accel/accel.sh@72 -- # IFS== 00:07:36.816 01:07:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:36.816 01:07:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:36.816 01:07:59 accel -- accel/accel.sh@75 -- # killprocess 732256 00:07:36.816 01:07:59 accel -- common/autotest_common.sh@948 -- # '[' -z 732256 ']' 00:07:36.816 01:07:59 accel -- common/autotest_common.sh@952 -- # kill -0 732256 00:07:36.816 01:07:59 accel -- common/autotest_common.sh@953 -- # uname 00:07:36.816 01:07:59 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:36.816 01:07:59 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 732256 00:07:36.816 01:07:59 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:36.816 01:07:59 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:36.816 01:07:59 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 732256' 00:07:36.816 killing process with pid 732256 00:07:36.816 01:07:59 accel -- common/autotest_common.sh@967 -- # 
kill 732256 00:07:36.816 01:07:59 accel -- common/autotest_common.sh@972 -- # wait 732256 00:07:37.079 01:07:59 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:37.079 01:07:59 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:37.079 01:07:59 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:37.079 01:07:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.079 01:07:59 accel -- common/autotest_common.sh@10 -- # set +x 00:07:37.079 01:07:59 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:07:37.079 01:07:59 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:37.079 01:07:59 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:07:37.079 01:07:59 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:37.079 01:07:59 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:37.079 01:07:59 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.079 01:07:59 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:37.079 01:07:59 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:37.079 01:07:59 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:07:37.079 01:07:59 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:07:37.079 01:07:59 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:37.079 01:07:59 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:07:37.079 01:07:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:37.079 01:07:59 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:37.079 01:07:59 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:37.079 01:07:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.079 01:07:59 accel -- common/autotest_common.sh@10 -- # set +x 00:07:37.079 ************************************ 00:07:37.079 START TEST accel_missing_filename 00:07:37.079 ************************************ 00:07:37.079 01:07:59 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:07:37.079 01:07:59 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:07:37.079 01:07:59 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:37.079 01:07:59 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:37.079 01:07:59 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:37.079 01:07:59 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:37.079 01:07:59 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:37.079 01:07:59 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:07:37.079 01:07:59 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:37.079 01:07:59 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:07:37.079 01:07:59 
accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:37.079 01:07:59 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:37.079 01:07:59 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.079 01:07:59 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:37.079 01:07:59 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:37.079 01:07:59 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:07:37.079 01:07:59 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:07:37.079 [2024-07-25 01:07:59.562787] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:07:37.079 [2024-07-25 01:07:59.562835] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid732523 ] 00:07:37.339 EAL: No free 2048 kB hugepages reported on node 1 00:07:37.339 [2024-07-25 01:07:59.616913] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.339 [2024-07-25 01:07:59.688213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.339 [2024-07-25 01:07:59.728329] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:37.339 [2024-07-25 01:07:59.787085] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:07:37.600 A filename is required. 
00:07:37.600 01:07:59 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:07:37.600 01:07:59 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:37.600 01:07:59 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:07:37.600 01:07:59 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:07:37.600 01:07:59 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:07:37.600 01:07:59 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:37.600 00:07:37.600 real 0m0.321s 00:07:37.600 user 0m0.251s 00:07:37.600 sys 0m0.107s 00:07:37.600 01:07:59 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:37.600 01:07:59 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:07:37.600 ************************************ 00:07:37.600 END TEST accel_missing_filename 00:07:37.600 ************************************ 00:07:37.600 01:07:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:37.600 01:07:59 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:37.600 01:07:59 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:37.600 01:07:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.600 01:07:59 accel -- common/autotest_common.sh@10 -- # set +x 00:07:37.600 ************************************ 00:07:37.600 START TEST accel_compress_verify 00:07:37.600 ************************************ 00:07:37.600 01:07:59 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:37.600 01:07:59 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:07:37.600 01:07:59 
accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:37.600 01:07:59 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:37.600 01:07:59 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:37.600 01:07:59 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:37.600 01:07:59 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:37.600 01:07:59 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:37.600 01:07:59 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:37.600 01:07:59 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:37.600 01:07:59 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:37.600 01:07:59 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:37.600 01:07:59 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.600 01:07:59 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:37.600 01:07:59 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:37.600 01:07:59 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:37.600 01:07:59 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:07:37.600 [2024-07-25 01:07:59.945790] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:07:37.600 [2024-07-25 01:07:59.945861] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid732671 ] 00:07:37.600 EAL: No free 2048 kB hugepages reported on node 1 00:07:37.600 [2024-07-25 01:08:00.000343] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.600 [2024-07-25 01:08:00.085989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.863 [2024-07-25 01:08:00.126912] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:37.863 [2024-07-25 01:08:00.186501] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:07:37.863 00:07:37.863 Compression does not support the verify option, aborting. 00:07:37.863 01:08:00 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:07:37.863 01:08:00 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:37.863 01:08:00 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:07:37.863 01:08:00 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:07:37.863 01:08:00 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:07:37.863 01:08:00 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:37.863 00:07:37.863 real 0m0.338s 00:07:37.863 user 0m0.260s 00:07:37.863 sys 0m0.115s 00:07:37.863 01:08:00 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:37.863 01:08:00 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:07:37.863 ************************************ 00:07:37.863 END TEST accel_compress_verify 00:07:37.863 ************************************ 00:07:37.863 01:08:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:37.863 01:08:00 accel -- 
accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:37.863 01:08:00 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:37.863 01:08:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.863 01:08:00 accel -- common/autotest_common.sh@10 -- # set +x 00:07:37.863 ************************************ 00:07:37.863 START TEST accel_wrong_workload 00:07:37.863 ************************************ 00:07:37.863 01:08:00 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:07:37.863 01:08:00 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:07:37.863 01:08:00 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:37.863 01:08:00 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:37.863 01:08:00 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:37.863 01:08:00 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:37.863 01:08:00 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:37.863 01:08:00 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:07:37.863 01:08:00 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:37.863 01:08:00 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:07:37.863 01:08:00 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:37.863 01:08:00 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:37.863 01:08:00 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.863 01:08:00 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 
00:07:37.863 01:08:00 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:37.863 01:08:00 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:07:37.863 01:08:00 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:07:37.863 Unsupported workload type: foobar 00:07:37.863 [2024-07-25 01:08:00.340189] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:37.863 accel_perf options: 00:07:37.863 [-h help message] 00:07:37.863 [-q queue depth per core] 00:07:37.863 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:37.863 [-T number of threads per core 00:07:37.863 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:37.863 [-t time in seconds] 00:07:37.863 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:37.863 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:37.863 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:37.863 [-l for compress/decompress workloads, name of uncompressed input file 00:07:37.863 [-S for crc32c workload, use this seed value (default 0) 00:07:37.863 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:37.863 [-f for fill workload, use this BYTE value (default 255) 00:07:37.863 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:37.863 [-y verify result if this switch is on] 00:07:37.863 [-a tasks to allocate per core (default: same value as -q)] 00:07:37.863 Can be used to spread operations across a wider range of memory. 
00:07:37.863 01:08:00 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:07:37.863 01:08:00 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:37.863 01:08:00 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:37.863 01:08:00 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:37.863 00:07:37.863 real 0m0.028s 00:07:37.863 user 0m0.018s 00:07:37.863 sys 0m0.009s 00:07:37.864 01:08:00 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:37.864 01:08:00 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:07:37.864 ************************************ 00:07:37.864 END TEST accel_wrong_workload 00:07:37.864 ************************************ 00:07:38.124 Error: writing output failed: Broken pipe 00:07:38.124 01:08:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:38.124 01:08:00 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:38.124 01:08:00 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:38.124 01:08:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:38.124 01:08:00 accel -- common/autotest_common.sh@10 -- # set +x 00:07:38.124 ************************************ 00:07:38.124 START TEST accel_negative_buffers 00:07:38.124 ************************************ 00:07:38.124 01:08:00 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:38.124 01:08:00 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:07:38.124 01:08:00 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:38.124 01:08:00 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:38.124 01:08:00 accel.accel_negative_buffers -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:38.124 01:08:00 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:38.124 01:08:00 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:38.125 01:08:00 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:07:38.125 01:08:00 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:38.125 01:08:00 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:07:38.125 01:08:00 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:38.125 01:08:00 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:38.125 01:08:00 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:38.125 01:08:00 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:38.125 01:08:00 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:38.125 01:08:00 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:07:38.125 01:08:00 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:07:38.125 -x option must be non-negative. 00:07:38.125 [2024-07-25 01:08:00.437915] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:38.125 accel_perf options: 00:07:38.125 [-h help message] 00:07:38.125 [-q queue depth per core] 00:07:38.125 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:38.125 [-T number of threads per core 00:07:38.125 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:07:38.125 [-t time in seconds] 00:07:38.125 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:38.125 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:38.125 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:38.125 [-l for compress/decompress workloads, name of uncompressed input file 00:07:38.125 [-S for crc32c workload, use this seed value (default 0) 00:07:38.125 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:38.125 [-f for fill workload, use this BYTE value (default 255) 00:07:38.125 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:38.125 [-y verify result if this switch is on] 00:07:38.125 [-a tasks to allocate per core (default: same value as -q)] 00:07:38.125 Can be used to spread operations across a wider range of memory. 
00:07:38.125 01:08:00 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:07:38.125 01:08:00 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:38.125 01:08:00 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:38.125 01:08:00 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:38.125 00:07:38.125 real 0m0.032s 00:07:38.125 user 0m0.020s 00:07:38.125 sys 0m0.012s 00:07:38.125 01:08:00 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:38.125 01:08:00 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:07:38.125 ************************************ 00:07:38.125 END TEST accel_negative_buffers 00:07:38.125 ************************************ 00:07:38.125 Error: writing output failed: Broken pipe 00:07:38.125 01:08:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:38.125 01:08:00 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:38.125 01:08:00 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:38.125 01:08:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:38.125 01:08:00 accel -- common/autotest_common.sh@10 -- # set +x 00:07:38.125 ************************************ 00:07:38.125 START TEST accel_crc32c 00:07:38.125 ************************************ 00:07:38.125 01:08:00 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:38.125 01:08:00 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:38.125 01:08:00 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:38.125 01:08:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:38.125 01:08:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:38.125 01:08:00 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 
00:07:38.125 01:08:00 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:38.125 01:08:00 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:38.125 01:08:00 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:38.125 01:08:00 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:38.125 01:08:00 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:38.125 01:08:00 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:38.125 01:08:00 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:38.125 01:08:00 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:38.125 01:08:00 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:38.125 [2024-07-25 01:08:00.530718] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:07:38.125 [2024-07-25 01:08:00.530780] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid732827 ] 00:07:38.125 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.125 [2024-07-25 01:08:00.586007] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.385 [2024-07-25 01:08:00.659259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.385 01:08:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:38.385 01:08:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:38.385 01:08:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:38.385 01:08:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:38.385 01:08:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:38.385 01:08:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:38.385 
01:08:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:38.385 01:08:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:38.385 01:08:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:38.385 01:08:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:38.385 01:08:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:38.385 01:08:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:38.385 01:08:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:38.385 01:08:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:38.385 01:08:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:38.385 01:08:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:38.385 01:08:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:38.385 01:08:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:38.385 01:08:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:38.385 01:08:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:38.385 01:08:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:07:38.385 01:08:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:38.385 01:08:00 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:38.385 01:08:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:38.385 01:08:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:38.385 01:08:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:38.385 01:08:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:38.385 01:08:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:38.385 01:08:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:38.385 01:08:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:38.385 01:08:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:38.385 01:08:00 accel.accel_crc32c -- 
accel/accel.sh@19 -- # IFS=: 00:07:38.385 01:08:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:38.385 01:08:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:38.385 01:08:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:38.385 01:08:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:38.385 01:08:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:38.385 01:08:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:07:38.385 01:08:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:38.385 01:08:00 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:38.385 01:08:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:38.385 01:08:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:38.385 01:08:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:38.385 01:08:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:38.385 01:08:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:38.385 01:08:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:38.385 01:08:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:38.385 01:08:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:38.385 01:08:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:38.385 01:08:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:38.385 01:08:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:38.385 01:08:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:38.385 01:08:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:38.385 01:08:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:38.385 01:08:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:38.385 01:08:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:38.385 01:08:00 accel.accel_crc32c -- accel/accel.sh@19 -- # 
IFS=: 00:07:38.385 01:08:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:38.385 01:08:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:38.385 01:08:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:38.385 01:08:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:38.385 01:08:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:38.385 01:08:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:38.385 01:08:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:38.385 01:08:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:38.385 01:08:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:38.385 01:08:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:38.385 01:08:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:38.385 01:08:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:38.385 01:08:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:39.768 01:08:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:39.768 01:08:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:39.768 01:08:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:39.768 01:08:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:39.768 01:08:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:39.768 01:08:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:39.768 01:08:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:39.768 01:08:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:39.768 01:08:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:39.768 01:08:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:39.768 01:08:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:39.768 01:08:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:39.768 01:08:01 accel.accel_crc32c -- 
accel/accel.sh@20 -- # val= 00:07:39.768 01:08:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:39.768 01:08:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:39.768 01:08:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:39.768 01:08:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:39.768 01:08:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:39.768 01:08:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:39.768 01:08:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:39.768 01:08:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:39.768 01:08:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:39.768 01:08:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:39.768 01:08:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:39.768 01:08:01 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:39.768 01:08:01 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:39.768 01:08:01 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:39.768 00:07:39.768 real 0m1.329s 00:07:39.768 user 0m1.210s 00:07:39.768 sys 0m0.121s 00:07:39.768 01:08:01 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:39.768 01:08:01 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:39.768 ************************************ 00:07:39.768 END TEST accel_crc32c 00:07:39.768 ************************************ 00:07:39.768 01:08:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:39.768 01:08:01 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:39.768 01:08:01 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:39.768 01:08:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:39.768 01:08:01 accel -- common/autotest_common.sh@10 -- # set +x 
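The `accel_crc32c` test above drives the crc32c workload (software module, 4096-byte buffers, seed configured via `-S`). For reference, the checksum being exercised is CRC-32C (Castagnoli polynomial), not the plain CRC-32 of zlib. A minimal pure-Python bitwise sketch of that computation, for illustration only — SPDK's actual implementation is optimized C and may be hardware-offloaded:

```python
def crc32c(data: bytes, crc: int = 0) -> int:
    """Bitwise CRC-32C (Castagnoli): reflected polynomial 0x82F63B78,
    initial value 0xFFFFFFFF, final XOR 0xFFFFFFFF."""
    crc ^= 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            # Shift right one bit; apply the polynomial when the low bit is set.
            crc = (crc >> 1) ^ (0x82F63B78 * (crc & 1))
    return crc ^ 0xFFFFFFFF
```

The standard check vector is `crc32c(b"123456789") == 0xE3069283`, which distinguishes CRC-32C from the CRC-32 used by zlib/gzip.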
00:07:39.768 ************************************ 00:07:39.768 START TEST accel_crc32c_C2 00:07:39.768 ************************************ 00:07:39.768 01:08:01 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:39.768 01:08:01 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:39.768 01:08:01 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:39.768 01:08:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:39.768 01:08:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:39.768 01:08:01 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:39.768 01:08:01 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:39.768 01:08:01 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:39.768 01:08:01 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:39.768 01:08:01 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:39.768 01:08:01 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:39.768 01:08:01 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:39.768 01:08:01 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:39.768 01:08:01 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:39.768 01:08:01 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:39.768 [2024-07-25 01:08:01.910956] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:07:39.768 [2024-07-25 01:08:01.911005] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid733076 ] 00:07:39.768 EAL: No free 2048 kB hugepages reported on node 1 00:07:39.768 [2024-07-25 01:08:01.964028] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.768 [2024-07-25 01:08:02.035494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.768 01:08:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:39.768 01:08:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.768 01:08:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:39.768 01:08:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:39.768 01:08:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:39.768 01:08:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.768 01:08:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:39.768 01:08:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:39.768 01:08:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:39.768 01:08:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.768 01:08:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:39.768 01:08:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:39.768 01:08:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:39.768 01:08:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.768 01:08:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:39.768 01:08:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:39.768 01:08:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:39.768 01:08:02 accel.accel_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:07:39.768 01:08:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:39.768 01:08:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:39.768 01:08:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:07:39.768 01:08:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.768 01:08:02 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:39.768 01:08:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:39.768 01:08:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:39.768 01:08:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:39.768 01:08:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.768 01:08:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:39.768 01:08:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:39.768 01:08:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:39.768 01:08:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.768 01:08:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:39.768 01:08:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:39.768 01:08:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:39.768 01:08:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.768 01:08:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:39.768 01:08:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:39.768 01:08:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:39.768 01:08:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.768 01:08:02 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:39.768 01:08:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:39.768 01:08:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
read -r var val 00:07:39.768 01:08:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:39.768 01:08:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.768 01:08:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:39.768 01:08:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:39.768 01:08:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:39.768 01:08:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.768 01:08:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:39.768 01:08:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:39.768 01:08:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:39.769 01:08:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.769 01:08:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:39.769 01:08:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:39.769 01:08:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:39.769 01:08:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.769 01:08:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:39.769 01:08:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:39.769 01:08:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:39.769 01:08:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.769 01:08:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:39.769 01:08:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:39.769 01:08:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:39.769 01:08:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.769 01:08:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:39.769 01:08:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:39.769 01:08:02 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:39.769 01:08:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.769 01:08:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:39.769 01:08:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:41.151 01:08:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:41.151 01:08:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.151 01:08:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:41.151 01:08:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:41.151 01:08:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:41.151 01:08:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.151 01:08:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:41.151 01:08:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:41.151 01:08:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:41.151 01:08:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.151 01:08:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:41.151 01:08:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:41.151 01:08:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:41.151 01:08:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.151 01:08:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:41.151 01:08:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:41.151 01:08:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:41.151 01:08:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.151 01:08:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:41.151 01:08:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:41.151 01:08:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:41.151 
01:08:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.151 01:08:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:41.151 01:08:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:41.151 01:08:03 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:41.151 01:08:03 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:41.151 01:08:03 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:41.151 00:07:41.151 real 0m1.319s 00:07:41.151 user 0m1.213s 00:07:41.151 sys 0m0.107s 00:07:41.151 01:08:03 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:41.151 01:08:03 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:41.151 ************************************ 00:07:41.151 END TEST accel_crc32c_C2 00:07:41.151 ************************************ 00:07:41.151 01:08:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:41.151 01:08:03 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:41.151 01:08:03 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:41.151 01:08:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.151 01:08:03 accel -- common/autotest_common.sh@10 -- # set +x 00:07:41.151 ************************************ 00:07:41.151 START TEST accel_copy 00:07:41.151 ************************************ 00:07:41.151 01:08:03 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 
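The `accel_crc32c_C2` variant above runs the same crc32c workload with `-C 2`, which per the help text configures "the io vector size to test": the transfer buffer is presented to the accel layer as an iovec of 2 segments rather than one contiguous region. A hypothetical Python sketch of that splitting, purely to illustrate the idea (`build_iov` is not an SPDK function; SPDK builds real `struct iovec` arrays in C):

```python
def build_iov(buf: bytes, iovcnt: int) -> list:
    """Split buf into iovcnt roughly equal segments, illustrating what an
    accel_perf-style -C option asks of the transfer buffer. Hypothetical
    helper; segment sizing is an assumption, not SPDK's actual layout."""
    if iovcnt < 1:
        raise ValueError("io vector size must be at least 1")
    # Ceiling division so every byte lands in some segment.
    step = -(-len(buf) // iovcnt)
    return [buf[i:i + step] for i in range(0, len(buf), step)]
```

For the 4096-byte transfers in this test, `-C 2` would correspond to two 2048-byte segments that concatenate back to the original buffer.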
00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:07:41.151 [2024-07-25 01:08:03.292252] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:07:41.151 [2024-07-25 01:08:03.292300] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid733331 ] 00:07:41.151 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.151 [2024-07-25 01:08:03.345238] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.151 [2024-07-25 01:08:03.415912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:41.151 01:08:03 accel.accel_copy -- 
accel/accel.sh@19 -- # IFS=: 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@19 
-- # read -r var val 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:41.151 01:08:03 accel.accel_copy -- 
accel/accel.sh@20 -- # val= 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:41.151 01:08:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:42.535 01:08:04 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:42.535 01:08:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:42.535 01:08:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:42.535 01:08:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:42.535 01:08:04 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:42.535 01:08:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:42.535 01:08:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:42.535 01:08:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:42.535 01:08:04 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:42.535 01:08:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:42.535 01:08:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:42.535 01:08:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:42.535 01:08:04 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:42.535 01:08:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:42.535 01:08:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:42.535 01:08:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:42.535 01:08:04 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:42.535 01:08:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:42.535 01:08:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 
00:07:42.535 01:08:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:42.535 01:08:04 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:42.535 01:08:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:42.535 01:08:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:42.535 01:08:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:42.535 01:08:04 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:42.535 01:08:04 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:42.535 01:08:04 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:42.535 00:07:42.535 real 0m1.322s 00:07:42.535 user 0m1.215s 00:07:42.535 sys 0m0.110s 00:07:42.535 01:08:04 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:42.535 01:08:04 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:42.535 ************************************ 00:07:42.535 END TEST accel_copy 00:07:42.535 ************************************ 00:07:42.535 01:08:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:42.535 01:08:04 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:42.535 01:08:04 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:42.535 01:08:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.535 01:08:04 accel -- common/autotest_common.sh@10 -- # set +x 00:07:42.535 ************************************ 00:07:42.535 START TEST accel_fill 00:07:42.535 ************************************ 00:07:42.535 01:08:04 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:42.535 01:08:04 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:42.535 01:08:04 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:42.535 01:08:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:42.535 01:08:04 
accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:42.535 01:08:04 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:42.535 01:08:04 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:42.535 01:08:04 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:07:42.535 01:08:04 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:42.535 01:08:04 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:42.535 01:08:04 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:42.535 01:08:04 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:42.535 01:08:04 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:42.535 01:08:04 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:42.535 01:08:04 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:07:42.535 [2024-07-25 01:08:04.674882] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:07:42.535 [2024-07-25 01:08:04.674948] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid733578 ] 00:07:42.535 EAL: No free 2048 kB hugepages reported on node 1 00:07:42.535 [2024-07-25 01:08:04.728625] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.535 [2024-07-25 01:08:04.800062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.535 01:08:04 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:42.535 01:08:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:42.535 01:08:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:42.535 01:08:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:42.535 01:08:04 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:42.535 01:08:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:42.535 01:08:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:42.535 01:08:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:42.535 01:08:04 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:42.535 01:08:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:42.535 01:08:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:42.535 01:08:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:42.535 01:08:04 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:42.535 01:08:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:42.535 01:08:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:42.535 01:08:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:42.535 01:08:04 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:42.535 01:08:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:42.535 01:08:04 accel.accel_fill -- accel/accel.sh@19 -- # 
IFS=: 00:07:42.535 01:08:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:42.535 01:08:04 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:42.535 01:08:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:42.535 01:08:04 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:42.535 01:08:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:42.535 01:08:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:42.535 01:08:04 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:42.535 01:08:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:42.535 01:08:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:42.535 01:08:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:42.535 01:08:04 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:42.535 01:08:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:42.535 01:08:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:42.535 01:08:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:42.535 01:08:04 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:42.535 01:08:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:42.535 01:08:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:42.535 01:08:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:42.535 01:08:04 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:42.535 01:08:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:42.535 01:08:04 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:42.535 01:08:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:42.535 01:08:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:42.535 01:08:04 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:42.535 01:08:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:42.535 01:08:04 accel.accel_fill -- 
accel/accel.sh@19 -- # IFS=: 00:07:42.535 01:08:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:42.535 01:08:04 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:42.535 01:08:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:42.535 01:08:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:42.535 01:08:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:42.535 01:08:04 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:42.535 01:08:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:42.535 01:08:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:42.535 01:08:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:42.535 01:08:04 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:42.535 01:08:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:42.535 01:08:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:42.535 01:08:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:42.535 01:08:04 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:42.536 01:08:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:42.536 01:08:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:42.536 01:08:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:42.536 01:08:04 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:42.536 01:08:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:42.536 01:08:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:42.536 01:08:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:42.536 01:08:04 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:42.536 01:08:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:42.536 01:08:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:42.536 01:08:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:43.919 01:08:05 accel.accel_fill -- accel/accel.sh@20 
-- # val= 00:07:43.919 01:08:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:43.919 01:08:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:43.919 01:08:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:43.919 01:08:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:43.919 01:08:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:43.919 01:08:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:43.919 01:08:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:43.919 01:08:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:43.919 01:08:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:43.919 01:08:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:43.919 01:08:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:43.919 01:08:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:43.919 01:08:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:43.919 01:08:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:43.919 01:08:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:43.919 01:08:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:43.919 01:08:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:43.919 01:08:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:43.919 01:08:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:43.919 01:08:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:43.919 01:08:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:43.919 01:08:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:43.919 01:08:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:43.919 01:08:05 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:43.919 01:08:05 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:43.919 01:08:05 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:07:43.919 00:07:43.919 real 0m1.325s 00:07:43.919 user 0m1.221s 00:07:43.919 sys 0m0.107s 00:07:43.919 01:08:05 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:43.919 01:08:05 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:43.919 ************************************ 00:07:43.919 END TEST accel_fill 00:07:43.919 ************************************ 00:07:43.919 01:08:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:43.919 01:08:06 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:43.919 01:08:06 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:43.919 01:08:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.919 01:08:06 accel -- common/autotest_common.sh@10 -- # set +x 00:07:43.919 ************************************ 00:07:43.919 START TEST accel_copy_crc32c 00:07:43.919 ************************************ 00:07:43.919 01:08:06 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:07:43.919 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:43.919 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:43.919 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:43.919 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:43.919 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:43.919 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:43.919 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:43.919 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:43.919 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@32 -- 
# [[ 0 -gt 0 ]] 00:07:43.919 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:43.919 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:43.919 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:43.919 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:43.919 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:43.919 [2024-07-25 01:08:06.052147] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:07:43.919 [2024-07-25 01:08:06.052194] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid733833 ] 00:07:43.919 EAL: No free 2048 kB hugepages reported on node 1 00:07:43.919 [2024-07-25 01:08:06.104521] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.919 [2024-07-25 01:08:06.175787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.919 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:43.919 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:43.919 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:43.919 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:43.919 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:43.920 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:43.920 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:43.920 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:43.920 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:43.920 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:43.920 01:08:06 
accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:43.920 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:43.920 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:43.920 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:43.920 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:43.920 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:43.920 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:43.920 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:43.920 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:43.920 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:43.920 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:43.920 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:43.920 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:43.920 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:43.920 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:43.920 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:43.920 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:43.920 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:43.920 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:43.920 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:43.920 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:43.920 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:43.920 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:43.920 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 
bytes' 00:07:43.920 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:43.920 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:43.920 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:43.920 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:43.920 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:43.920 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:43.920 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:43.920 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:43.920 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:43.920 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:43.920 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:43.920 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:43.920 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:43.920 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:43.920 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:43.920 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:43.920 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:43.920 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:43.920 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:43.920 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:43.920 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:43.920 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:43.920 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:43.920 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- 
# read -r var val 00:07:43.920 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:43.920 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:43.920 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:43.920 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:43.920 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:43.920 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:43.920 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:43.920 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:43.920 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:43.920 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:43.920 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:43.920 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:43.920 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:43.920 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:43.920 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:43.920 01:08:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:44.860 01:08:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:44.860 01:08:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:44.860 01:08:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:44.860 01:08:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:44.860 01:08:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:44.860 01:08:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:44.860 01:08:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:44.860 01:08:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:07:44.860 01:08:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:44.860 01:08:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:44.860 01:08:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:44.860 01:08:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:44.860 01:08:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:44.860 01:08:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:44.860 01:08:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:44.860 01:08:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:44.860 01:08:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:44.860 01:08:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:44.860 01:08:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:44.860 01:08:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:44.860 01:08:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:44.860 01:08:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:44.860 01:08:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:44.860 01:08:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:44.860 01:08:07 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:44.860 01:08:07 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:44.860 01:08:07 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:44.860 00:07:44.860 real 0m1.318s 00:07:44.860 user 0m1.217s 00:07:44.860 sys 0m0.104s 00:07:44.860 01:08:07 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:44.860 01:08:07 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:44.860 ************************************ 00:07:44.860 END TEST accel_copy_crc32c 
00:07:44.860 ************************************ 00:07:45.177 01:08:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:45.177 01:08:07 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:45.177 01:08:07 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:45.177 01:08:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.177 01:08:07 accel -- common/autotest_common.sh@10 -- # set +x 00:07:45.177 ************************************ 00:07:45.177 START TEST accel_copy_crc32c_C2 00:07:45.177 ************************************ 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:45.177 01:08:07 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:45.177 [2024-07-25 01:08:07.439291] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:07:45.177 [2024-07-25 01:08:07.439358] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid734080 ] 00:07:45.177 EAL: No free 2048 kB hugepages reported on node 1 00:07:45.177 [2024-07-25 01:08:07.494701] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.177 [2024-07-25 01:08:07.566328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@21 -- # case "$var" in 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:45.177 01:08:07 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:45.177 
01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:45.177 01:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:46.558 01:08:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:46.558 01:08:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.558 01:08:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:46.558 01:08:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:46.558 01:08:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:46.558 01:08:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.558 01:08:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:46.558 01:08:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:46.558 01:08:08 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:46.558 01:08:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.558 01:08:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:46.558 01:08:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:46.558 01:08:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:46.558 01:08:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.558 01:08:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:46.558 01:08:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:46.558 01:08:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:46.558 01:08:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.558 01:08:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:46.558 01:08:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:46.558 01:08:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:46.558 01:08:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.558 01:08:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:46.558 01:08:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:46.558 01:08:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:46.558 01:08:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:46.558 01:08:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:46.558 00:07:46.558 real 0m1.329s 00:07:46.558 user 0m1.223s 00:07:46.558 sys 0m0.108s 00:07:46.558 01:08:08 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:46.558 01:08:08 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:46.558 ************************************ 00:07:46.558 
END TEST accel_copy_crc32c_C2 00:07:46.558 ************************************ 00:07:46.558 01:08:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:46.558 01:08:08 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:46.558 01:08:08 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:46.558 01:08:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.558 01:08:08 accel -- common/autotest_common.sh@10 -- # set +x 00:07:46.558 ************************************ 00:07:46.558 START TEST accel_dualcast 00:07:46.558 ************************************ 00:07:46.558 01:08:08 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:07:46.558 01:08:08 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:46.558 01:08:08 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:46.558 01:08:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:46.558 01:08:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:46.558 01:08:08 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:46.558 01:08:08 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:46.558 01:08:08 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:46.558 01:08:08 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:46.558 01:08:08 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:46.558 01:08:08 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:46.558 01:08:08 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:46.558 01:08:08 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:46.558 01:08:08 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:46.558 01:08:08 accel.accel_dualcast -- 
accel/accel.sh@41 -- # jq -r . 00:07:46.558 [2024-07-25 01:08:08.825645] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:07:46.558 [2024-07-25 01:08:08.825690] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid734327 ] 00:07:46.558 EAL: No free 2048 kB hugepages reported on node 1 00:07:46.558 [2024-07-25 01:08:08.878728] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.558 [2024-07-25 01:08:08.949872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.558 01:08:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:46.558 01:08:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:46.558 01:08:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:46.558 01:08:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:46.558 01:08:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:46.558 01:08:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:46.558 01:08:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:46.558 01:08:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:46.558 01:08:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:46.558 01:08:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:46.558 01:08:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:46.558 01:08:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:46.558 01:08:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:46.558 01:08:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:46.558 01:08:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:46.558 01:08:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 
00:07:46.558 01:08:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:46.558 01:08:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:46.558 01:08:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:46.558 01:08:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:46.558 01:08:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:46.558 01:08:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:46.558 01:08:08 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:46.558 01:08:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:46.558 01:08:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:46.558 01:08:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:46.558 01:08:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:46.558 01:08:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:46.558 01:08:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:46.558 01:08:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:46.558 01:08:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:46.558 01:08:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:46.558 01:08:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:46.558 01:08:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:46.558 01:08:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:46.559 01:08:08 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:46.559 01:08:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:46.559 01:08:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:46.559 01:08:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:46.559 01:08:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:46.559 01:08:08 accel.accel_dualcast 
-- accel/accel.sh@19 -- # IFS=: 00:07:46.559 01:08:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:46.559 01:08:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:46.559 01:08:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:46.559 01:08:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:46.559 01:08:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:46.559 01:08:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:46.559 01:08:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:46.559 01:08:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:46.559 01:08:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:46.559 01:08:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:46.559 01:08:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:46.559 01:08:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:46.559 01:08:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:46.559 01:08:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:46.559 01:08:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:46.559 01:08:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:46.559 01:08:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:46.559 01:08:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:46.559 01:08:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:46.559 01:08:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:46.559 01:08:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:46.559 01:08:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:46.559 01:08:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:46.559 01:08:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:46.559 01:08:08 accel.accel_dualcast 
-- accel/accel.sh@19 -- # read -r var val 00:07:47.940 01:08:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:47.940 01:08:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:47.940 01:08:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:47.940 01:08:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:47.940 01:08:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:47.940 01:08:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:47.940 01:08:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:47.940 01:08:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:47.940 01:08:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:47.940 01:08:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:47.940 01:08:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:47.940 01:08:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:47.940 01:08:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:47.940 01:08:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:47.940 01:08:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:47.940 01:08:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:47.940 01:08:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:47.940 01:08:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:47.940 01:08:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:47.940 01:08:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:47.940 01:08:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:47.940 01:08:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:47.940 01:08:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:47.940 01:08:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:47.940 01:08:10 accel.accel_dualcast -- 
accel/accel.sh@27 -- # [[ -n software ]] 00:07:47.940 01:08:10 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:47.940 01:08:10 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:47.940 00:07:47.940 real 0m1.329s 00:07:47.940 user 0m1.221s 00:07:47.940 sys 0m0.110s 00:07:47.940 01:08:10 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:47.940 01:08:10 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:47.940 ************************************ 00:07:47.940 END TEST accel_dualcast 00:07:47.940 ************************************ 00:07:47.940 01:08:10 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:47.940 01:08:10 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:47.940 01:08:10 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:47.940 01:08:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.940 01:08:10 accel -- common/autotest_common.sh@10 -- # set +x 00:07:47.940 ************************************ 00:07:47.940 START TEST accel_compare 00:07:47.940 ************************************ 00:07:47.940 01:08:10 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:07:47.940 01:08:10 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:47.940 01:08:10 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:47.940 01:08:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:47.940 01:08:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:47.940 01:08:10 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:47.940 01:08:10 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:47.940 01:08:10 accel.accel_compare -- accel/accel.sh@12 -- # 
build_accel_config 00:07:47.940 01:08:10 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:47.940 01:08:10 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:47.940 01:08:10 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:47.940 01:08:10 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:47.940 01:08:10 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:47.940 01:08:10 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:47.940 01:08:10 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:47.940 [2024-07-25 01:08:10.217268] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:07:47.940 [2024-07-25 01:08:10.217334] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid734582 ] 00:07:47.940 EAL: No free 2048 kB hugepages reported on node 1 00:07:47.940 [2024-07-25 01:08:10.273248] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.940 [2024-07-25 01:08:10.346919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.940 01:08:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:47.940 01:08:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:47.940 01:08:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:47.940 01:08:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:47.940 01:08:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:47.940 01:08:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:47.940 01:08:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:47.940 01:08:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:47.940 01:08:10 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:47.940 
01:08:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:47.940 01:08:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:47.940 01:08:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:47.940 01:08:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:47.940 01:08:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:47.940 01:08:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:47.940 01:08:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:47.940 01:08:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:47.940 01:08:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:47.940 01:08:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:47.940 01:08:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:47.940 01:08:10 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:47.940 01:08:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:47.940 01:08:10 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:47.940 01:08:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:47.940 01:08:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:47.940 01:08:10 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:47.940 01:08:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:47.940 01:08:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:47.940 01:08:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:47.940 01:08:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:47.940 01:08:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:47.940 01:08:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:47.940 01:08:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:47.940 01:08:10 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:47.940 
01:08:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:47.940 01:08:10 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:47.940 01:08:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:47.940 01:08:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:47.940 01:08:10 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:47.940 01:08:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:47.940 01:08:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:47.940 01:08:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:47.940 01:08:10 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:47.940 01:08:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:47.941 01:08:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:47.941 01:08:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:47.941 01:08:10 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:47.941 01:08:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:47.941 01:08:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:47.941 01:08:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:47.941 01:08:10 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:47.941 01:08:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:47.941 01:08:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:47.941 01:08:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:47.941 01:08:10 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:47.941 01:08:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:47.941 01:08:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:47.941 01:08:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:47.941 01:08:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:47.941 
01:08:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:47.941 01:08:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:47.941 01:08:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:47.941 01:08:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:47.941 01:08:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:47.941 01:08:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:47.941 01:08:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:49.321 01:08:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:49.321 01:08:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:49.321 01:08:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:49.321 01:08:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:49.322 01:08:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:49.322 01:08:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:49.322 01:08:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:49.322 01:08:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:49.322 01:08:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:49.322 01:08:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:49.322 01:08:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:49.322 01:08:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:49.322 01:08:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:49.322 01:08:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:49.322 01:08:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:49.322 01:08:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:49.322 01:08:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:49.322 01:08:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:49.322 01:08:11 accel.accel_compare -- 
accel/accel.sh@19 -- # IFS=: 00:07:49.322 01:08:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:49.322 01:08:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:49.322 01:08:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:49.322 01:08:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:49.322 01:08:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:49.322 01:08:11 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:49.322 01:08:11 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:49.322 01:08:11 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:49.322 00:07:49.322 real 0m1.331s 00:07:49.322 user 0m1.225s 00:07:49.322 sys 0m0.108s 00:07:49.322 01:08:11 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:49.322 01:08:11 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:49.322 ************************************ 00:07:49.322 END TEST accel_compare 00:07:49.322 ************************************ 00:07:49.322 01:08:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:49.322 01:08:11 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:49.322 01:08:11 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:49.322 01:08:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:49.322 01:08:11 accel -- common/autotest_common.sh@10 -- # set +x 00:07:49.322 ************************************ 00:07:49.322 START TEST accel_xor 00:07:49.322 ************************************ 00:07:49.322 01:08:11 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 
00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:49.322 [2024-07-25 01:08:11.603891] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:07:49.322 [2024-07-25 01:08:11.603938] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid734828 ] 00:07:49.322 EAL: No free 2048 kB hugepages reported on node 1 00:07:49.322 [2024-07-25 01:08:11.657582] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.322 [2024-07-25 01:08:11.728838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:49.322 
01:08:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:49.322 01:08:11 
accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:49.322 01:08:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:50.703 01:08:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:50.703 01:08:12 accel.accel_xor -- accel/accel.sh@21 -- # 
case "$var" in 00:07:50.703 01:08:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:50.703 01:08:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:50.703 01:08:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:50.703 01:08:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:50.703 01:08:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:50.703 01:08:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:50.703 01:08:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:50.703 01:08:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:50.703 01:08:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:50.703 01:08:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:50.703 01:08:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:50.703 01:08:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:50.703 01:08:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:50.703 01:08:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:50.703 01:08:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:50.703 01:08:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:50.703 01:08:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:50.703 01:08:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:50.703 01:08:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:50.703 01:08:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:50.703 01:08:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:50.703 01:08:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:50.703 01:08:12 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:50.703 01:08:12 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:50.703 01:08:12 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:50.703 00:07:50.703 real 0m1.325s 00:07:50.703 user 0m1.215s 00:07:50.703 sys 
0m0.112s 00:07:50.703 01:08:12 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:50.703 01:08:12 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:50.703 ************************************ 00:07:50.703 END TEST accel_xor 00:07:50.703 ************************************ 00:07:50.703 01:08:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:50.703 01:08:12 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:50.703 01:08:12 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:50.703 01:08:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:50.703 01:08:12 accel -- common/autotest_common.sh@10 -- # set +x 00:07:50.703 ************************************ 00:07:50.703 START TEST accel_xor 00:07:50.703 ************************************ 00:07:50.703 01:08:12 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:07:50.703 01:08:12 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:50.703 01:08:12 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:50.703 01:08:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:50.704 01:08:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:50.704 01:08:12 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:50.704 01:08:12 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:50.704 01:08:12 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:50.704 01:08:12 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:50.704 01:08:12 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:50.704 01:08:12 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:50.704 01:08:12 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:50.704 01:08:12 accel.accel_xor -- 
accel/accel.sh@36 -- # [[ -n '' ]] 00:07:50.704 01:08:12 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:50.704 01:08:12 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:50.704 [2024-07-25 01:08:12.987632] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:07:50.704 [2024-07-25 01:08:12.987679] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid735075 ] 00:07:50.704 EAL: No free 2048 kB hugepages reported on node 1 00:07:50.704 [2024-07-25 01:08:13.040987] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.704 [2024-07-25 01:08:13.112155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.704 01:08:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:50.704 01:08:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:50.704 01:08:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:50.704 01:08:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:50.704 01:08:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:50.704 01:08:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:50.704 01:08:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:50.704 01:08:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:50.704 01:08:13 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:50.704 01:08:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:50.704 01:08:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:50.704 01:08:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:50.704 01:08:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:50.704 01:08:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:50.704 01:08:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 
00:07:50.704 01:08:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:50.704 01:08:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:50.704 01:08:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:50.704 01:08:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:50.704 01:08:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:50.704 01:08:13 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:50.704 01:08:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:50.704 01:08:13 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:50.704 01:08:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:50.704 01:08:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:50.704 01:08:13 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:50.704 01:08:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:50.704 01:08:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:50.704 01:08:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:50.704 01:08:13 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:50.704 01:08:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:50.704 01:08:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:50.704 01:08:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:50.704 01:08:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:50.704 01:08:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:50.704 01:08:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:50.704 01:08:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:50.704 01:08:13 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:50.704 01:08:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:50.704 01:08:13 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:07:50.704 01:08:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:50.704 01:08:13 
accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:50.704 01:08:13 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:50.704 01:08:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:50.704 01:08:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:50.704 01:08:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:50.704 01:08:13 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:50.704 01:08:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:50.704 01:08:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:50.704 01:08:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:50.704 01:08:13 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:50.704 01:08:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:50.704 01:08:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:50.704 01:08:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:50.704 01:08:13 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:50.704 01:08:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:50.704 01:08:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:50.704 01:08:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:50.704 01:08:13 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:50.704 01:08:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:50.704 01:08:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:50.704 01:08:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:50.704 01:08:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:50.704 01:08:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:50.704 01:08:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:50.704 01:08:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:50.704 01:08:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:50.704 01:08:13 accel.accel_xor -- accel/accel.sh@21 -- # 
case "$var" in 00:07:50.704 01:08:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:50.704 01:08:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:52.085 01:08:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:52.085 01:08:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:52.085 01:08:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:52.085 01:08:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:52.085 01:08:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:52.085 01:08:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:52.085 01:08:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:52.085 01:08:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:52.085 01:08:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:52.085 01:08:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:52.085 01:08:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:52.085 01:08:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:52.085 01:08:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:52.085 01:08:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:52.085 01:08:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:52.085 01:08:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:52.085 01:08:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:52.085 01:08:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:52.085 01:08:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:52.085 01:08:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:52.085 01:08:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:52.085 01:08:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:52.085 01:08:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:52.085 01:08:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:52.085 01:08:14 accel.accel_xor -- 
accel/accel.sh@27 -- # [[ -n software ]] 00:07:52.085 01:08:14 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:52.085 01:08:14 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:52.085 00:07:52.085 real 0m1.325s 00:07:52.085 user 0m1.214s 00:07:52.085 sys 0m0.114s 00:07:52.085 01:08:14 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:52.085 01:08:14 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:52.085 ************************************ 00:07:52.085 END TEST accel_xor 00:07:52.085 ************************************ 00:07:52.085 01:08:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:52.085 01:08:14 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:52.085 01:08:14 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:52.085 01:08:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:52.085 01:08:14 accel -- common/autotest_common.sh@10 -- # set +x 00:07:52.085 ************************************ 00:07:52.085 START TEST accel_dif_verify 00:07:52.085 ************************************ 00:07:52.085 01:08:14 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:07:52.085 01:08:14 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:52.085 01:08:14 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:52.085 01:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:52.085 01:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:52.085 01:08:14 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:52.085 01:08:14 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:52.085 01:08:14 accel.accel_dif_verify -- accel/accel.sh@12 -- # 
build_accel_config 00:07:52.085 01:08:14 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:52.085 01:08:14 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:52.085 01:08:14 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:52.085 01:08:14 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:52.085 01:08:14 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:52.085 01:08:14 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:52.085 01:08:14 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:52.085 [2024-07-25 01:08:14.372103] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:07:52.085 [2024-07-25 01:08:14.372172] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid735326 ] 00:07:52.085 EAL: No free 2048 kB hugepages reported on node 1 00:07:52.085 [2024-07-25 01:08:14.426478] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.085 [2024-07-25 01:08:14.497742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.085 01:08:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:52.085 01:08:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:52.085 01:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:52.085 01:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:52.085 01:08:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:52.085 01:08:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:52.085 01:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:52.085 01:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:52.085 01:08:14 accel.accel_dif_verify -- 
accel/accel.sh@20 -- # val=0x1 00:07:52.085 01:08:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:52.085 01:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:52.085 01:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:52.085 01:08:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:52.085 01:08:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:52.085 01:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:52.085 01:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:52.085 01:08:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:52.086 01:08:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:52.086 01:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:52.086 01:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:52.086 01:08:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:52.086 01:08:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:52.086 01:08:14 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:52.086 01:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:52.086 01:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:52.086 01:08:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:52.086 01:08:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:52.086 01:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:52.086 01:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:52.086 01:08:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:52.086 01:08:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:52.086 01:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:52.086 01:08:14 accel.accel_dif_verify -- 
accel/accel.sh@19 -- # read -r var val 00:07:52.086 01:08:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:52.086 01:08:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:52.086 01:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:52.086 01:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:52.086 01:08:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:52.086 01:08:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:52.086 01:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:52.086 01:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:52.086 01:08:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:52.086 01:08:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:52.086 01:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:52.086 01:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:52.086 01:08:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:52.086 01:08:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:52.086 01:08:14 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:52.086 01:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:52.086 01:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:52.086 01:08:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:52.086 01:08:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:52.086 01:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:52.086 01:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:52.086 01:08:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:52.086 01:08:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:52.086 01:08:14 
accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:52.086 01:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:52.086 01:08:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:52.086 01:08:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:52.086 01:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:52.086 01:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:52.086 01:08:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:52.086 01:08:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:52.086 01:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:52.086 01:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:52.086 01:08:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:52.086 01:08:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:52.086 01:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:52.086 01:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:52.086 01:08:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:52.086 01:08:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:52.086 01:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:52.086 01:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:52.086 01:08:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:52.086 01:08:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:52.086 01:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:52.086 01:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:53.468 01:08:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:53.468 01:08:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:53.468 01:08:15 accel.accel_dif_verify -- 
accel/accel.sh@19 -- # IFS=: 00:07:53.468 01:08:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:53.468 01:08:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:53.468 01:08:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:53.468 01:08:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:53.468 01:08:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:53.468 01:08:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:53.468 01:08:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:53.468 01:08:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:53.468 01:08:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:53.468 01:08:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:53.468 01:08:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:53.468 01:08:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:53.468 01:08:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:53.468 01:08:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:53.468 01:08:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:53.468 01:08:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:53.468 01:08:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:53.468 01:08:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:53.468 01:08:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:53.468 01:08:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:53.468 01:08:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:53.468 01:08:15 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:53.468 01:08:15 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:53.468 01:08:15 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ 
software == \s\o\f\t\w\a\r\e ]] 00:07:53.468 00:07:53.468 real 0m1.326s 00:07:53.468 user 0m1.218s 00:07:53.468 sys 0m0.110s 00:07:53.468 01:08:15 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:53.468 01:08:15 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:53.468 ************************************ 00:07:53.468 END TEST accel_dif_verify 00:07:53.468 ************************************ 00:07:53.468 01:08:15 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:53.468 01:08:15 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:53.468 01:08:15 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:53.468 01:08:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:53.468 01:08:15 accel -- common/autotest_common.sh@10 -- # set +x 00:07:53.468 ************************************ 00:07:53.468 START TEST accel_dif_generate 00:07:53.468 ************************************ 00:07:53.468 01:08:15 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:07:53.468 01:08:15 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:53.468 01:08:15 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:53.468 01:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:53.468 01:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:53.468 01:08:15 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:53.468 01:08:15 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:53.468 01:08:15 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:53.468 01:08:15 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:53.468 01:08:15 
accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:53.468 01:08:15 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:53.468 01:08:15 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:53.468 01:08:15 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:53.468 01:08:15 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:53.468 01:08:15 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:53.468 [2024-07-25 01:08:15.755818] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:07:53.468 [2024-07-25 01:08:15.755880] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid735578 ] 00:07:53.468 EAL: No free 2048 kB hugepages reported on node 1 00:07:53.468 [2024-07-25 01:08:15.810201] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.468 [2024-07-25 01:08:15.881063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.468 01:08:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:53.468 01:08:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:53.468 01:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:53.468 01:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:53.468 01:08:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:53.468 01:08:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:53.468 01:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:53.468 01:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:53.468 01:08:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:53.468 01:08:15 accel.accel_dif_generate -- accel/accel.sh@21 
-- # case "$var" in 00:07:53.468 01:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:53.468 01:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:53.468 01:08:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:53.468 01:08:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:53.468 01:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:53.468 01:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:53.468 01:08:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:53.468 01:08:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:53.468 01:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:53.468 01:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:53.468 01:08:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:53.468 01:08:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:53.468 01:08:15 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:53.468 01:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:53.468 01:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:53.468 01:08:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:53.468 01:08:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:53.468 01:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:53.468 01:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:53.468 01:08:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:53.468 01:08:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:53.468 01:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:53.468 01:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 
00:07:53.468 01:08:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:53.468 01:08:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:53.468 01:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:53.468 01:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:53.468 01:08:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:53.468 01:08:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:53.468 01:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:53.468 01:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:53.468 01:08:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:53.468 01:08:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:53.468 01:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:53.468 01:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:53.468 01:08:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:53.468 01:08:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:53.468 01:08:15 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:53.468 01:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:53.468 01:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:53.468 01:08:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:53.468 01:08:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:53.468 01:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:53.468 01:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:53.469 01:08:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:53.469 01:08:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:53.469 01:08:15 
accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:53.469 01:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:53.469 01:08:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:53.469 01:08:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:53.469 01:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:53.469 01:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:53.469 01:08:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:07:53.469 01:08:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:53.469 01:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:53.469 01:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:53.469 01:08:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:53.469 01:08:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:53.469 01:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:53.469 01:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:53.469 01:08:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:53.469 01:08:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:53.469 01:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:53.469 01:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:53.469 01:08:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:53.469 01:08:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:53.469 01:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:53.469 01:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:54.852 01:08:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:54.852 01:08:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 
00:07:54.852 01:08:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:54.852 01:08:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:54.852 01:08:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:54.852 01:08:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:54.852 01:08:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:54.852 01:08:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:54.852 01:08:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:54.852 01:08:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:54.852 01:08:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:54.852 01:08:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:54.852 01:08:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:54.852 01:08:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:54.852 01:08:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:54.852 01:08:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:54.852 01:08:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:54.852 01:08:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:54.852 01:08:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:54.852 01:08:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:54.852 01:08:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:54.852 01:08:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:54.852 01:08:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:54.852 01:08:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:54.852 01:08:17 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:54.852 01:08:17 accel.accel_dif_generate -- accel/accel.sh@27 -- 
# [[ -n dif_generate ]] 00:07:54.852 01:08:17 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:54.852 00:07:54.852 real 0m1.323s 00:07:54.852 user 0m1.215s 00:07:54.852 sys 0m0.110s 00:07:54.852 01:08:17 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:54.852 01:08:17 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:54.852 ************************************ 00:07:54.852 END TEST accel_dif_generate 00:07:54.852 ************************************ 00:07:54.852 01:08:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:54.852 01:08:17 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:54.852 01:08:17 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:54.852 01:08:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:54.852 01:08:17 accel -- common/autotest_common.sh@10 -- # set +x 00:07:54.852 ************************************ 00:07:54.852 START TEST accel_dif_generate_copy 00:07:54.852 ************************************ 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:54.852 01:08:17 
accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:54.852 [2024-07-25 01:08:17.137728] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:07:54.852 [2024-07-25 01:08:17.137787] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid735825 ] 00:07:54.852 EAL: No free 2048 kB hugepages reported on node 1 00:07:54.852 [2024-07-25 01:08:17.193447] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.852 [2024-07-25 01:08:17.264558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # read -r var val 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- 
accel/accel.sh@20 -- # val=1 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:54.852 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:54.853 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:54.853 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:54.853 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:54.853 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:54.853 01:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:56.277 01:08:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:56.277 01:08:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:56.277 01:08:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:07:56.277 01:08:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:56.278 01:08:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:56.278 01:08:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:56.278 01:08:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:56.278 01:08:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:56.278 01:08:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:56.278 01:08:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:56.278 01:08:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:56.278 01:08:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:56.278 01:08:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:56.278 01:08:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:56.278 01:08:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:56.278 01:08:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:56.278 01:08:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:56.278 01:08:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:56.278 01:08:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:56.278 01:08:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:56.278 01:08:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:56.278 01:08:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:56.278 01:08:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:56.278 01:08:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:56.278 01:08:18 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:56.278 01:08:18 
accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:56.278 01:08:18 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:56.278 00:07:56.278 real 0m1.326s 00:07:56.278 user 0m1.213s 00:07:56.278 sys 0m0.114s 00:07:56.278 01:08:18 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:56.278 01:08:18 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:56.278 ************************************ 00:07:56.278 END TEST accel_dif_generate_copy 00:07:56.278 ************************************ 00:07:56.278 01:08:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:56.278 01:08:18 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:56.278 01:08:18 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:56.278 01:08:18 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:56.278 01:08:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:56.278 01:08:18 accel -- common/autotest_common.sh@10 -- # set +x 00:07:56.278 ************************************ 00:07:56.278 START TEST accel_comp 00:07:56.278 ************************************ 00:07:56.278 01:08:18 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:56.278 01:08:18 
accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:56.278 [2024-07-25 01:08:18.518908] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:07:56.278 [2024-07-25 01:08:18.518956] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid736077 ] 00:07:56.278 EAL: No free 2048 kB hugepages reported on node 1 00:07:56.278 [2024-07-25 01:08:18.571516] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.278 [2024-07-25 01:08:18.643114] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 
00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 
00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:56.278 01:08:18 
accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:56.278 01:08:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:57.661 01:08:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:57.661 01:08:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:57.661 01:08:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:57.661 01:08:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:57.661 01:08:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:57.661 01:08:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:57.661 01:08:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:57.661 01:08:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:57.661 01:08:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:57.661 01:08:19 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:07:57.661 01:08:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:57.661 01:08:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:57.661 01:08:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:57.661 01:08:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:57.661 01:08:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:57.661 01:08:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:57.661 01:08:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:57.661 01:08:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:57.661 01:08:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:57.661 01:08:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:57.661 01:08:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:57.661 01:08:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:57.661 01:08:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:57.661 01:08:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:57.661 01:08:19 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:57.661 01:08:19 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:57.661 01:08:19 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:57.661 00:07:57.661 real 0m1.326s 00:07:57.661 user 0m1.217s 00:07:57.661 sys 0m0.110s 00:07:57.661 01:08:19 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:57.661 01:08:19 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:57.661 ************************************ 00:07:57.661 END TEST accel_comp 00:07:57.661 ************************************ 00:07:57.661 01:08:19 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:57.661 01:08:19 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:57.661 01:08:19 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:57.661 01:08:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:57.661 01:08:19 accel -- common/autotest_common.sh@10 -- # set +x 00:07:57.661 ************************************ 00:07:57.661 START TEST accel_decomp 00:07:57.661 ************************************ 00:07:57.661 01:08:19 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:57.661 01:08:19 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:57.661 01:08:19 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:57.661 01:08:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:57.661 01:08:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:57.661 01:08:19 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:57.661 01:08:19 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:57.661 01:08:19 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:57.661 01:08:19 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:57.661 01:08:19 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:57.661 01:08:19 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:57.661 01:08:19 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:57.661 01:08:19 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:57.661 01:08:19 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:57.661 01:08:19 accel.accel_decomp -- accel/accel.sh@41 -- 
# jq -r . 00:07:57.661 [2024-07-25 01:08:19.892670] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:07:57.661 [2024-07-25 01:08:19.892719] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid736324 ] 00:07:57.661 EAL: No free 2048 kB hugepages reported on node 1 00:07:57.661 [2024-07-25 01:08:19.945941] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.661 [2024-07-25 01:08:20.020046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.661 01:08:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:57.661 01:08:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:57.661 01:08:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:57.661 01:08:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:57.661 01:08:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:57.661 01:08:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:57.661 01:08:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:57.661 01:08:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:57.661 01:08:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:57.661 01:08:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:57.661 01:08:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:57.661 01:08:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:57.661 01:08:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:57.661 01:08:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:57.661 01:08:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:57.661 01:08:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:57.661 01:08:20 accel.accel_decomp -- 
accel/accel.sh@20 -- # val= 00:07:57.661 01:08:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:57.661 01:08:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:57.661 01:08:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:57.661 01:08:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:57.661 01:08:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:57.661 01:08:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:57.661 01:08:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:57.661 01:08:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:57.661 01:08:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:57.661 01:08:20 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:57.661 01:08:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:57.661 01:08:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:57.661 01:08:20 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:57.661 01:08:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:57.661 01:08:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:57.661 01:08:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:57.661 01:08:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:57.661 01:08:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:57.661 01:08:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:57.661 01:08:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:57.661 01:08:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:07:57.661 01:08:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:57.661 01:08:20 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:57.661 01:08:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:57.661 01:08:20 accel.accel_decomp -- 
accel/accel.sh@19 -- # read -r var val 00:07:57.661 01:08:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:57.661 01:08:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:57.661 01:08:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:57.661 01:08:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:57.661 01:08:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:57.661 01:08:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:57.661 01:08:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:57.661 01:08:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:57.661 01:08:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:57.661 01:08:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:57.661 01:08:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:57.661 01:08:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:57.661 01:08:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:57.661 01:08:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:57.661 01:08:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:57.661 01:08:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:57.662 01:08:20 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:57.662 01:08:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:57.662 01:08:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:57.662 01:08:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:57.662 01:08:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:57.662 01:08:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:57.662 01:08:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:57.662 01:08:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:57.662 
01:08:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:57.662 01:08:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:57.662 01:08:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:57.662 01:08:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:57.662 01:08:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:57.662 01:08:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:57.662 01:08:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:57.662 01:08:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:59.044 01:08:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:59.044 01:08:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:59.044 01:08:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:59.044 01:08:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:59.044 01:08:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:59.044 01:08:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:59.044 01:08:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:59.044 01:08:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:59.044 01:08:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:59.044 01:08:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:59.044 01:08:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:59.044 01:08:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:59.044 01:08:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:59.044 01:08:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:59.044 01:08:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:59.044 01:08:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:59.044 01:08:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:59.044 01:08:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" 
in 00:07:59.044 01:08:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:59.044 01:08:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:59.044 01:08:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:59.044 01:08:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:59.044 01:08:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:59.045 01:08:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:59.045 01:08:21 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:59.045 01:08:21 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:59.045 01:08:21 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:59.045 00:07:59.045 real 0m1.330s 00:07:59.045 user 0m1.223s 00:07:59.045 sys 0m0.109s 00:07:59.045 01:08:21 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:59.045 01:08:21 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:59.045 ************************************ 00:07:59.045 END TEST accel_decomp 00:07:59.045 ************************************ 00:07:59.045 01:08:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:59.045 01:08:21 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:59.045 01:08:21 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:59.045 01:08:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:59.045 01:08:21 accel -- common/autotest_common.sh@10 -- # set +x 00:07:59.045 ************************************ 00:07:59.045 START TEST accel_decomp_full 00:07:59.045 ************************************ 00:07:59.045 01:08:21 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:59.045 
01:08:21 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:07:59.045 [2024-07-25 01:08:21.284976] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
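The accel_decomp_full entries above show the test harness launching SPDK's accel_perf example with `-w decompress` and `-o 0`. As a minimal sketch of that invocation (editor's reconstruction from the log; `SPDK_ROOT` is a hypothetical variable standing in for the Jenkins workspace path, and this only assembles and prints the command rather than running the binary):

```shell
#!/usr/bin/env sh
# Sketch: reconstruct the accel_perf command line seen in the log above.
# SPDK_ROOT is an assumed variable; the log hard-codes the workspace path.
SPDK_ROOT="${SPDK_ROOT:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}"

# Flags as they appear in the log: -t 1 (run for 1 second), -w decompress
# (workload type), -l <file> (compressed input), -y (verify the result),
# -o 0 (let the tool size transfers from the input instead of a fixed size).
set -- -t 1 -w decompress -l "$SPDK_ROOT/test/accel/bib" -y -o 0

# Print, rather than execute, the command the harness would run.
echo "$SPDK_ROOT/build/examples/accel_perf $*"
```

The `-o 0` argument is what distinguishes this `accel_decomp_full` run from the plain `accel_decomp` test earlier in the log, which is why the reported data size changes from '4096 bytes' to '111250 bytes'.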
00:07:59.045 [2024-07-25 01:08:21.285041] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid736578 ] 00:07:59.045 EAL: No free 2048 kB hugepages reported on node 1 00:07:59.045 [2024-07-25 01:08:21.339406] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.045 [2024-07-25 01:08:21.410660] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:59.045 01:08:21 
accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:07:59.045 01:08:21 accel.accel_decomp_full -- 
accel/accel.sh@19 -- # IFS=: 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:59.045 01:08:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:00.428 01:08:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:00.428 01:08:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:00.428 01:08:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:00.428 01:08:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:00.428 01:08:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:00.428 01:08:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:00.428 01:08:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:00.428 01:08:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:00.428 01:08:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:00.428 01:08:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:00.428 01:08:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:00.428 01:08:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:00.428 01:08:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:00.428 01:08:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 
00:08:00.428 01:08:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:00.428 01:08:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:00.428 01:08:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:00.428 01:08:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:00.428 01:08:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:00.428 01:08:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:00.428 01:08:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:00.428 01:08:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:00.428 01:08:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:00.428 01:08:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:00.428 01:08:22 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:00.428 01:08:22 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:00.429 01:08:22 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:00.429 00:08:00.429 real 0m1.341s 00:08:00.429 user 0m1.229s 00:08:00.429 sys 0m0.114s 00:08:00.429 01:08:22 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:00.429 01:08:22 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:08:00.429 ************************************ 00:08:00.429 END TEST accel_decomp_full 00:08:00.429 ************************************ 00:08:00.429 01:08:22 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:00.429 01:08:22 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:00.429 01:08:22 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:08:00.429 01:08:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:00.429 01:08:22 accel 
-- common/autotest_common.sh@10 -- # set +x 00:08:00.429 ************************************ 00:08:00.429 START TEST accel_decomp_mcore 00:08:00.429 ************************************ 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:00.429 [2024-07-25 01:08:22.681355] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
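The mcore variants add `-m 0xf` to the accel_perf command line, a hexadecimal core mask selecting cores 0-3; the four "Reactor started on core N" notices in the log correspond to the set bits of that mask. A small sketch of how a mask maps to a reactor count (editor's illustration, not part of the test scripts):

```shell
#!/usr/bin/env sh
# Sketch: count the set bits in the -m core mask from the log (0xf),
# i.e. how many reactor threads accel_perf will start.
mask=0xf
n=$((mask))   # arithmetic expansion parses the hex constant (0xf -> 15)
count=0
while [ "$n" -gt 0 ]; do
  count=$((count + (n & 1)))  # add the lowest bit
  n=$((n >> 1))               # shift to the next bit
done
echo "cores selected by mask $mask: $count"
```

With `0xf` this yields 4 cores, matching the "Total cores available: 4" notice and the per-core `user 0m4.568s` figure later in this test (roughly 4 reactors times the 1-second run).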
00:08:00.429 [2024-07-25 01:08:22.681402] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid736825 ] 00:08:00.429 EAL: No free 2048 kB hugepages reported on node 1 00:08:00.429 [2024-07-25 01:08:22.734661] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:00.429 [2024-07-25 01:08:22.809017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:00.429 [2024-07-25 01:08:22.809125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:00.429 [2024-07-25 01:08:22.809152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:00.429 [2024-07-25 01:08:22.809154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:00.429 01:08:22 
accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@19 
-- # read -r var val 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:00.429 01:08:22 accel.accel_decomp_mcore 
-- accel/accel.sh@21 -- # case "$var" in 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:00.429 01:08:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:01.812 01:08:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:01.812 01:08:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:01.812 01:08:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:01.812 01:08:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:01.812 01:08:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:01.812 01:08:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:01.812 01:08:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:01.812 01:08:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:01.812 01:08:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:01.812 01:08:23 
accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:01.812 01:08:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:01.812 01:08:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:01.812 01:08:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:01.812 01:08:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:01.813 01:08:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:01.813 01:08:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:01.813 01:08:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:01.813 01:08:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:01.813 01:08:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:01.813 01:08:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:01.813 01:08:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:01.813 01:08:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:01.813 01:08:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:01.813 01:08:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:01.813 01:08:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:01.813 01:08:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:01.813 01:08:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:01.813 01:08:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:01.813 01:08:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:01.813 01:08:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:01.813 01:08:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:01.813 01:08:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:01.813 01:08:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:01.813 
01:08:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:01.813 01:08:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:01.813 01:08:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:01.813 01:08:23 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:01.813 01:08:23 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:01.813 01:08:23 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:01.813 00:08:01.813 real 0m1.344s 00:08:01.813 user 0m4.568s 00:08:01.813 sys 0m0.117s 00:08:01.813 01:08:24 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:01.813 01:08:24 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:01.813 ************************************ 00:08:01.813 END TEST accel_decomp_mcore 00:08:01.813 ************************************ 00:08:01.813 01:08:24 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:01.813 01:08:24 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:01.813 01:08:24 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:01.813 01:08:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:01.813 01:08:24 accel -- common/autotest_common.sh@10 -- # set +x 00:08:01.813 ************************************ 00:08:01.813 START TEST accel_decomp_full_mcore 00:08:01.813 ************************************ 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # 
local accel_module 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:01.813 [2024-07-25 01:08:24.094407] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:08:01.813 [2024-07-25 01:08:24.094477] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid737073 ] 00:08:01.813 EAL: No free 2048 kB hugepages reported on node 1 00:08:01.813 [2024-07-25 01:08:24.149913] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:01.813 [2024-07-25 01:08:24.224449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:01.813 [2024-07-25 01:08:24.224543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:01.813 [2024-07-25 01:08:24.224648] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:01.813 [2024-07-25 01:08:24.224650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:01.813 01:08:24 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:01.813 01:08:24 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 
-- # case "$var" in 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:01.813 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:01.814 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:01.814 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:01.814 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:01.814 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:01.814 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:01.814 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:01.814 01:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:03.201 01:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:03.201 01:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:03.201 01:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:03.201 01:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:03.201 
01:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:03.201 01:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:03.201 01:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:03.201 01:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:03.201 01:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:03.201 01:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:03.201 01:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:03.201 01:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:03.201 01:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:03.201 01:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:03.201 01:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:03.201 01:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:03.201 01:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:03.201 01:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:03.201 01:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:03.201 01:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:03.201 01:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:03.201 01:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:03.201 01:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:03.201 01:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:03.201 01:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:03.201 01:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:03.201 01:08:25 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:08:03.201 01:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:03.201 01:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:03.201 01:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:03.201 01:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:03.201 01:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:03.201 01:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:03.201 01:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:03.201 01:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:03.201 01:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:03.201 01:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:03.201 01:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:03.201 01:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:03.201 00:08:03.201 real 0m1.358s 00:08:03.201 user 0m4.608s 00:08:03.201 sys 0m0.114s 00:08:03.201 01:08:25 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:03.201 01:08:25 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:03.201 ************************************ 00:08:03.201 END TEST accel_decomp_full_mcore 00:08:03.201 ************************************ 00:08:03.201 01:08:25 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:03.201 01:08:25 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:03.201 01:08:25 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:08:03.201 01:08:25 accel -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:08:03.201 01:08:25 accel -- common/autotest_common.sh@10 -- # set +x 00:08:03.201 ************************************ 00:08:03.201 START TEST accel_decomp_mthread 00:08:03.201 ************************************ 00:08:03.201 01:08:25 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:03.201 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:08:03.201 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:08:03.201 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:03.201 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:03.201 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:03.201 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:03.201 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:08:03.201 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:03.201 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:03.201 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:03.201 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:03.201 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:03.201 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:08:03.201 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 
00:08:03.201 [2024-07-25 01:08:25.502396] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:08:03.201 [2024-07-25 01:08:25.502445] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid737332 ] 00:08:03.201 EAL: No free 2048 kB hugepages reported on node 1 00:08:03.201 [2024-07-25 01:08:25.555653] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.202 [2024-07-25 01:08:25.626965] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.202 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:03.202 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:03.202 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:03.202 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:03.202 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:03.202 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:03.202 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:03.202 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:03.202 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:03.202 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:03.202 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:03.202 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:03.202 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:03.202 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:03.202 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:03.202 01:08:25 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:03.202 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:03.202 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:03.202 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:03.202 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:03.202 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:03.202 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:03.202 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:03.202 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:03.202 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:03.202 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:03.202 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:03.202 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:03.202 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:03.202 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:03.202 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:03.202 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:03.202 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:03.202 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:03.202 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:03.202 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:03.202 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:03.202 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:08:03.202 
01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:03.202 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:08:03.202 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:03.202 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:03.202 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:03.202 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:03.202 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:03.202 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:03.202 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:03.202 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:03.202 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:03.202 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:03.202 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:03.202 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:03.202 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:03.202 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:03.202 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:08:03.202 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:03.202 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:03.202 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:03.202 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:03.202 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:03.202 01:08:25 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:03.202 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:03.202 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:03.202 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:03.202 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:03.202 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:03.202 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:03.202 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:03.202 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:03.202 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:03.202 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:03.202 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:03.202 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:03.202 01:08:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:04.669 01:08:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:04.669 01:08:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:04.669 01:08:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:04.669 01:08:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:04.669 01:08:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:04.669 01:08:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:04.669 01:08:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:04.669 01:08:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:04.669 01:08:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:04.669 01:08:26 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:08:04.669 01:08:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:04.669 01:08:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:04.669 01:08:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:04.669 01:08:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:04.669 01:08:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:04.669 01:08:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:04.669 01:08:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:04.669 01:08:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:04.669 01:08:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:04.669 01:08:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:04.669 01:08:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:04.669 01:08:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:04.669 01:08:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:04.669 01:08:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:04.669 01:08:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:04.669 01:08:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:04.669 01:08:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:04.669 01:08:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:04.669 01:08:26 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:04.669 01:08:26 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:04.669 01:08:26 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:04.669 00:08:04.669 real 0m1.329s 00:08:04.669 user 0m1.216s 00:08:04.669 sys 0m0.115s 00:08:04.669 01:08:26 
accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:04.669 01:08:26 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:08:04.669 ************************************ 00:08:04.669 END TEST accel_decomp_mthread 00:08:04.669 ************************************ 00:08:04.669 01:08:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:04.669 01:08:26 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:04.669 01:08:26 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:04.669 01:08:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:04.670 01:08:26 accel -- common/autotest_common.sh@10 -- # set +x 00:08:04.670 ************************************ 00:08:04.670 START TEST accel_decomp_full_mthread 00:08:04.670 ************************************ 00:08:04.670 01:08:26 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:04.670 01:08:26 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:08:04.670 01:08:26 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:08:04.670 01:08:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:04.670 01:08:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:04.670 01:08:26 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:04.670 01:08:26 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:04.670 01:08:26 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:08:04.670 01:08:26 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:04.670 01:08:26 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:04.670 01:08:26 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:04.670 01:08:26 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:04.670 01:08:26 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:04.670 01:08:26 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:08:04.670 01:08:26 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:08:04.670 [2024-07-25 01:08:26.891162] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:08:04.670 [2024-07-25 01:08:26.891215] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid737579 ] 00:08:04.670 EAL: No free 2048 kB hugepages reported on node 1 00:08:04.670 [2024-07-25 01:08:26.945278] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.670 [2024-07-25 01:08:27.016756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.670 01:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:04.670 01:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:04.670 01:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:04.670 01:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:04.670 01:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:04.670 01:08:27 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:04.670 01:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:04.670 01:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:04.670 01:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:04.670 01:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:04.670 01:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:04.670 01:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:04.670 01:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:04.670 01:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:04.670 01:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:04.670 01:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:04.670 01:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:04.670 01:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:04.670 01:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:04.670 01:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:04.670 01:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:04.670 01:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:04.670 01:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:04.670 01:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:04.670 01:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:04.670 01:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:04.670 01:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 
00:08:04.670 01:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:04.670 01:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:04.670 01:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:04.670 01:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:04.670 01:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:04.670 01:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:04.670 01:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:04.670 01:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:04.670 01:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:04.670 01:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:04.670 01:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:08:04.670 01:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:04.670 01:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:08:04.670 01:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:04.670 01:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:04.670 01:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:04.670 01:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:04.670 01:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:04.670 01:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:04.670 01:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:04.670 01:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 
00:08:04.670 01:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:04.670 01:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:04.670 01:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:04.670 01:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:04.670 01:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:04.670 01:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:04.670 01:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:08:04.670 01:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:04.670 01:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:04.670 01:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:04.670 01:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:04.670 01:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:04.670 01:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:04.670 01:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:04.670 01:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:04.670 01:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:04.670 01:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:04.670 01:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:04.670 01:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:04.670 01:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:04.670 01:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:04.670 01:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read 
-r var val 00:08:04.670 01:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:04.670 01:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:04.670 01:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:04.670 01:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:06.054 01:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:06.054 01:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:06.054 01:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:06.054 01:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:06.054 01:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:06.054 01:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:06.054 01:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:06.054 01:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:06.054 01:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:06.054 01:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:06.054 01:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:06.054 01:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:06.054 01:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:06.054 01:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:06.054 01:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:06.054 01:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:06.054 01:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:06.054 01:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" 
in 00:08:06.054 01:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:06.054 01:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:06.054 01:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:06.054 01:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:06.054 01:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:06.054 01:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:06.054 01:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:06.054 01:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:06.054 01:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:06.054 01:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:06.054 01:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:06.054 01:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:06.054 01:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:06.054 00:08:06.054 real 0m1.353s 00:08:06.054 user 0m1.245s 00:08:06.054 sys 0m0.110s 00:08:06.054 01:08:28 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:06.054 01:08:28 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:08:06.054 ************************************ 00:08:06.054 END TEST accel_decomp_full_mthread 00:08:06.054 ************************************ 00:08:06.054 01:08:28 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:06.054 01:08:28 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:08:06.054 01:08:28 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:06.054 
01:08:28 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:06.054 01:08:28 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:06.054 01:08:28 accel -- common/autotest_common.sh@10 -- # set +x 00:08:06.054 01:08:28 accel -- accel/accel.sh@137 -- # build_accel_config 00:08:06.054 01:08:28 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:06.054 01:08:28 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:06.054 01:08:28 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:06.054 01:08:28 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:06.054 01:08:28 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:06.054 01:08:28 accel -- accel/accel.sh@40 -- # local IFS=, 00:08:06.054 01:08:28 accel -- accel/accel.sh@41 -- # jq -r . 00:08:06.054 ************************************ 00:08:06.054 START TEST accel_dif_functional_tests 00:08:06.054 ************************************ 00:08:06.054 01:08:28 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:06.054 [2024-07-25 01:08:28.319616] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:08:06.054 [2024-07-25 01:08:28.319650] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid737831 ] 00:08:06.054 EAL: No free 2048 kB hugepages reported on node 1 00:08:06.054 [2024-07-25 01:08:28.371406] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:06.054 [2024-07-25 01:08:28.444486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:06.054 [2024-07-25 01:08:28.444584] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:06.054 [2024-07-25 01:08:28.444586] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.054 00:08:06.054 00:08:06.054 CUnit - A unit testing framework for C - Version 2.1-3 00:08:06.054 http://cunit.sourceforge.net/ 00:08:06.054 00:08:06.054 00:08:06.054 Suite: accel_dif 00:08:06.054 Test: verify: DIF generated, GUARD check ...passed 00:08:06.054 Test: verify: DIF generated, APPTAG check ...passed 00:08:06.054 Test: verify: DIF generated, REFTAG check ...passed 00:08:06.054 Test: verify: DIF not generated, GUARD check ...[2024-07-25 01:08:28.512040] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:06.054 passed 00:08:06.054 Test: verify: DIF not generated, APPTAG check ...[2024-07-25 01:08:28.512091] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:06.054 passed 00:08:06.054 Test: verify: DIF not generated, REFTAG check ...[2024-07-25 01:08:28.512125] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:06.054 passed 00:08:06.054 Test: verify: APPTAG correct, APPTAG check ...passed 00:08:06.054 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-25 01:08:28.512168] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App 
Tag: LBA=30, Expected=28, Actual=14 00:08:06.054 passed 00:08:06.054 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:08:06.054 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:08:06.054 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:08:06.054 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-25 01:08:28.512271] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:08:06.054 passed 00:08:06.054 Test: verify copy: DIF generated, GUARD check ...passed 00:08:06.054 Test: verify copy: DIF generated, APPTAG check ...passed 00:08:06.054 Test: verify copy: DIF generated, REFTAG check ...passed 00:08:06.054 Test: verify copy: DIF not generated, GUARD check ...[2024-07-25 01:08:28.512387] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:06.054 passed 00:08:06.054 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-25 01:08:28.512409] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:06.054 passed 00:08:06.054 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-25 01:08:28.512428] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:06.054 passed 00:08:06.054 Test: generate copy: DIF generated, GUARD check ...passed 00:08:06.054 Test: generate copy: DIF generated, APTTAG check ...passed 00:08:06.054 Test: generate copy: DIF generated, REFTAG check ...passed 00:08:06.055 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:08:06.055 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:08:06.055 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:08:06.055 Test: generate copy: iovecs-len validate ...[2024-07-25 01:08:28.512588] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:08:06.055 passed 00:08:06.055 Test: generate copy: buffer alignment validate ...passed 00:08:06.055 00:08:06.055 Run Summary: Type Total Ran Passed Failed Inactive 00:08:06.055 suites 1 1 n/a 0 0 00:08:06.055 tests 26 26 26 0 0 00:08:06.055 asserts 115 115 115 0 n/a 00:08:06.055 00:08:06.055 Elapsed time = 0.002 seconds 00:08:06.315 00:08:06.315 real 0m0.403s 00:08:06.315 user 0m0.607s 00:08:06.315 sys 0m0.146s 00:08:06.315 01:08:28 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:06.315 01:08:28 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:08:06.315 ************************************ 00:08:06.315 END TEST accel_dif_functional_tests 00:08:06.315 ************************************ 00:08:06.315 01:08:28 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:06.315 00:08:06.315 real 0m30.633s 00:08:06.315 user 0m34.420s 00:08:06.315 sys 0m4.006s 00:08:06.315 01:08:28 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:06.315 01:08:28 accel -- common/autotest_common.sh@10 -- # set +x 00:08:06.315 ************************************ 00:08:06.315 END TEST accel 00:08:06.315 ************************************ 00:08:06.315 01:08:28 -- common/autotest_common.sh@1142 -- # return 0 00:08:06.315 01:08:28 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:08:06.315 01:08:28 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:06.315 01:08:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:06.315 01:08:28 -- common/autotest_common.sh@10 -- # set +x 00:08:06.315 ************************************ 00:08:06.315 START TEST accel_rpc 00:08:06.315 ************************************ 00:08:06.315 01:08:28 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:08:06.575 * Looking for test storage... 
00:08:06.576 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:08:06.576 01:08:28 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:06.576 01:08:28 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:08:06.576 01:08:28 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=738019 00:08:06.576 01:08:28 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 738019 00:08:06.576 01:08:28 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 738019 ']' 00:08:06.576 01:08:28 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:06.576 01:08:28 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:06.576 01:08:28 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:06.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:06.576 01:08:28 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:06.576 01:08:28 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:06.576 [2024-07-25 01:08:28.904210] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:08:06.576 [2024-07-25 01:08:28.904261] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid738019 ] 00:08:06.576 EAL: No free 2048 kB hugepages reported on node 1 00:08:06.576 [2024-07-25 01:08:28.953972] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.576 [2024-07-25 01:08:29.034151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.516 01:08:29 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:07.516 01:08:29 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:08:07.516 01:08:29 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:08:07.516 01:08:29 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:08:07.516 01:08:29 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:08:07.516 01:08:29 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:08:07.516 01:08:29 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:08:07.516 01:08:29 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:07.516 01:08:29 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:07.516 01:08:29 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.516 ************************************ 00:08:07.516 START TEST accel_assign_opcode 00:08:07.516 ************************************ 00:08:07.516 01:08:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:08:07.516 01:08:29 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:08:07.516 01:08:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.516 01:08:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set 
+x 00:08:07.516 [2024-07-25 01:08:29.744244] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:08:07.516 01:08:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.516 01:08:29 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:08:07.516 01:08:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.516 01:08:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:07.516 [2024-07-25 01:08:29.752265] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:08:07.516 01:08:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.516 01:08:29 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:08:07.516 01:08:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.516 01:08:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:07.516 01:08:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.516 01:08:29 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:08:07.516 01:08:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.516 01:08:29 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:08:07.516 01:08:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:07.516 01:08:29 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:08:07.516 01:08:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.516 software 00:08:07.516 00:08:07.516 real 0m0.228s 00:08:07.516 user 0m0.045s 00:08:07.516 sys 0m0.009s 00:08:07.516 01:08:29 
accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:07.516 01:08:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:07.516 ************************************ 00:08:07.516 END TEST accel_assign_opcode 00:08:07.516 ************************************ 00:08:07.516 01:08:30 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:08:07.516 01:08:30 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 738019 00:08:07.516 01:08:30 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 738019 ']' 00:08:07.516 01:08:30 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 738019 00:08:07.516 01:08:30 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:08:07.516 01:08:30 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:07.776 01:08:30 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 738019 00:08:07.776 01:08:30 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:07.776 01:08:30 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:07.776 01:08:30 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 738019' 00:08:07.776 killing process with pid 738019 00:08:07.776 01:08:30 accel_rpc -- common/autotest_common.sh@967 -- # kill 738019 00:08:07.776 01:08:30 accel_rpc -- common/autotest_common.sh@972 -- # wait 738019 00:08:08.037 00:08:08.037 real 0m1.578s 00:08:08.037 user 0m1.660s 00:08:08.037 sys 0m0.407s 00:08:08.037 01:08:30 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:08.037 01:08:30 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:08.037 ************************************ 00:08:08.037 END TEST accel_rpc 00:08:08.037 ************************************ 00:08:08.037 01:08:30 -- common/autotest_common.sh@1142 -- # return 0 00:08:08.037 01:08:30 -- spdk/autotest.sh@185 -- # run_test app_cmdline 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:08:08.037 01:08:30 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:08.037 01:08:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:08.037 01:08:30 -- common/autotest_common.sh@10 -- # set +x 00:08:08.037 ************************************ 00:08:08.037 START TEST app_cmdline 00:08:08.037 ************************************ 00:08:08.037 01:08:30 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:08:08.037 * Looking for test storage... 00:08:08.037 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:08.037 01:08:30 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:08.037 01:08:30 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=738420 00:08:08.037 01:08:30 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:08.037 01:08:30 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 738420 00:08:08.037 01:08:30 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 738420 ']' 00:08:08.037 01:08:30 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:08.037 01:08:30 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:08.037 01:08:30 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:08.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:08.037 01:08:30 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:08.037 01:08:30 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:08.298 [2024-07-25 01:08:30.548365] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:08:08.298 [2024-07-25 01:08:30.548411] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid738420 ] 00:08:08.298 EAL: No free 2048 kB hugepages reported on node 1 00:08:08.298 [2024-07-25 01:08:30.601793] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.298 [2024-07-25 01:08:30.675269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.869 01:08:31 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:08.869 01:08:31 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:08:08.869 01:08:31 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:08:09.128 { 00:08:09.128 "version": "SPDK v24.09-pre git sha1 3c25cfe1d", 00:08:09.128 "fields": { 00:08:09.128 "major": 24, 00:08:09.128 "minor": 9, 00:08:09.128 "patch": 0, 00:08:09.128 "suffix": "-pre", 00:08:09.128 "commit": "3c25cfe1d" 00:08:09.128 } 00:08:09.128 } 00:08:09.128 01:08:31 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:09.128 01:08:31 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:09.128 01:08:31 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:09.128 01:08:31 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:09.128 01:08:31 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:09.128 01:08:31 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:09.128 01:08:31 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:09.128 01:08:31 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.128 01:08:31 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:09.128 01:08:31 app_cmdline -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.128 01:08:31 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:09.128 01:08:31 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:09.128 01:08:31 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:09.128 01:08:31 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:08:09.128 01:08:31 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:09.128 01:08:31 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:09.128 01:08:31 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:09.128 01:08:31 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:09.128 01:08:31 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:09.128 01:08:31 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:09.128 01:08:31 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:09.128 01:08:31 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:09.128 01:08:31 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:09.128 01:08:31 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:09.389 request: 00:08:09.389 { 00:08:09.389 "method": "env_dpdk_get_mem_stats", 00:08:09.389 "req_id": 1 
00:08:09.389 } 00:08:09.389 Got JSON-RPC error response 00:08:09.389 response: 00:08:09.389 { 00:08:09.389 "code": -32601, 00:08:09.389 "message": "Method not found" 00:08:09.389 } 00:08:09.389 01:08:31 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:08:09.389 01:08:31 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:09.389 01:08:31 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:09.389 01:08:31 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:09.389 01:08:31 app_cmdline -- app/cmdline.sh@1 -- # killprocess 738420 00:08:09.389 01:08:31 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 738420 ']' 00:08:09.389 01:08:31 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 738420 00:08:09.389 01:08:31 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:08:09.389 01:08:31 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:09.389 01:08:31 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 738420 00:08:09.389 01:08:31 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:09.389 01:08:31 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:09.389 01:08:31 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 738420' 00:08:09.389 killing process with pid 738420 00:08:09.389 01:08:31 app_cmdline -- common/autotest_common.sh@967 -- # kill 738420 00:08:09.389 01:08:31 app_cmdline -- common/autotest_common.sh@972 -- # wait 738420 00:08:09.649 00:08:09.649 real 0m1.682s 00:08:09.649 user 0m2.019s 00:08:09.649 sys 0m0.421s 00:08:09.649 01:08:32 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:09.649 01:08:32 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:09.649 ************************************ 00:08:09.649 END TEST app_cmdline 00:08:09.649 ************************************ 00:08:09.649 01:08:32 -- 
common/autotest_common.sh@1142 -- # return 0 00:08:09.649 01:08:32 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:09.649 01:08:32 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:09.649 01:08:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:09.649 01:08:32 -- common/autotest_common.sh@10 -- # set +x 00:08:09.910 ************************************ 00:08:09.910 START TEST version 00:08:09.910 ************************************ 00:08:09.910 01:08:32 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:09.910 * Looking for test storage... 00:08:09.910 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:09.910 01:08:32 version -- app/version.sh@17 -- # get_header_version major 00:08:09.910 01:08:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:09.910 01:08:32 version -- app/version.sh@14 -- # cut -f2 00:08:09.910 01:08:32 version -- app/version.sh@14 -- # tr -d '"' 00:08:09.910 01:08:32 version -- app/version.sh@17 -- # major=24 00:08:09.910 01:08:32 version -- app/version.sh@18 -- # get_header_version minor 00:08:09.910 01:08:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:09.910 01:08:32 version -- app/version.sh@14 -- # cut -f2 00:08:09.910 01:08:32 version -- app/version.sh@14 -- # tr -d '"' 00:08:09.910 01:08:32 version -- app/version.sh@18 -- # minor=9 00:08:09.910 01:08:32 version -- app/version.sh@19 -- # get_header_version patch 00:08:09.910 01:08:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:09.910 
01:08:32 version -- app/version.sh@14 -- # cut -f2 00:08:09.910 01:08:32 version -- app/version.sh@14 -- # tr -d '"' 00:08:09.910 01:08:32 version -- app/version.sh@19 -- # patch=0 00:08:09.910 01:08:32 version -- app/version.sh@20 -- # get_header_version suffix 00:08:09.910 01:08:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:09.910 01:08:32 version -- app/version.sh@14 -- # cut -f2 00:08:09.910 01:08:32 version -- app/version.sh@14 -- # tr -d '"' 00:08:09.910 01:08:32 version -- app/version.sh@20 -- # suffix=-pre 00:08:09.910 01:08:32 version -- app/version.sh@22 -- # version=24.9 00:08:09.910 01:08:32 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:09.910 01:08:32 version -- app/version.sh@28 -- # version=24.9rc0 00:08:09.910 01:08:32 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:09.910 01:08:32 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:09.910 01:08:32 version -- app/version.sh@30 -- # py_version=24.9rc0 00:08:09.910 01:08:32 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:08:09.910 00:08:09.910 real 0m0.138s 00:08:09.910 user 0m0.068s 00:08:09.910 sys 0m0.107s 00:08:09.910 01:08:32 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:09.910 01:08:32 version -- common/autotest_common.sh@10 -- # set +x 00:08:09.910 ************************************ 00:08:09.910 END TEST version 00:08:09.911 ************************************ 00:08:09.911 01:08:32 -- common/autotest_common.sh@1142 -- # return 0 00:08:09.911 01:08:32 -- spdk/autotest.sh@188 -- # 
'[' 0 -eq 1 ']' 00:08:09.911 01:08:32 -- spdk/autotest.sh@198 -- # uname -s 00:08:09.911 01:08:32 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:08:09.911 01:08:32 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:09.911 01:08:32 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:09.911 01:08:32 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:08:09.911 01:08:32 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:08:09.911 01:08:32 -- spdk/autotest.sh@260 -- # timing_exit lib 00:08:09.911 01:08:32 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:09.911 01:08:32 -- common/autotest_common.sh@10 -- # set +x 00:08:09.911 01:08:32 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:08:09.911 01:08:32 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:08:09.911 01:08:32 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:08:09.911 01:08:32 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:08:09.911 01:08:32 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:08:09.911 01:08:32 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:08:09.911 01:08:32 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:09.911 01:08:32 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:09.911 01:08:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:09.911 01:08:32 -- common/autotest_common.sh@10 -- # set +x 00:08:09.911 ************************************ 00:08:09.911 START TEST nvmf_tcp 00:08:09.911 ************************************ 00:08:09.911 01:08:32 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:10.172 * Looking for test storage... 00:08:10.172 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:08:10.172 01:08:32 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:08:10.172 01:08:32 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:10.172 01:08:32 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:10.172 01:08:32 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:08:10.172 01:08:32 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:10.172 01:08:32 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:10.172 01:08:32 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:10.172 01:08:32 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:10.172 01:08:32 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:10.172 01:08:32 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:10.172 01:08:32 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:10.172 01:08:32 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:10.172 01:08:32 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:10.172 01:08:32 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:10.172 01:08:32 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:10.172 01:08:32 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:10.172 01:08:32 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:10.172 01:08:32 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:10.172 01:08:32 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:10.172 01:08:32 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:10.172 01:08:32 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:10.172 01:08:32 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:10.172 01:08:32 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:10.172 01:08:32 nvmf_tcp -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:10.172 01:08:32 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.172 01:08:32 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.172 01:08:32 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.172 01:08:32 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:08:10.172 01:08:32 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.172 01:08:32 nvmf_tcp -- 
nvmf/common.sh@47 -- # : 0 00:08:10.172 01:08:32 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:10.172 01:08:32 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:10.172 01:08:32 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:10.173 01:08:32 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:10.173 01:08:32 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:10.173 01:08:32 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:10.173 01:08:32 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:10.173 01:08:32 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:10.173 01:08:32 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:10.173 01:08:32 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:08:10.173 01:08:32 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:08:10.173 01:08:32 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:10.173 01:08:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:10.173 01:08:32 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:08:10.173 01:08:32 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:08:10.173 01:08:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:10.173 01:08:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:10.173 01:08:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:10.173 ************************************ 00:08:10.173 START TEST nvmf_example 00:08:10.173 ************************************ 00:08:10.173 01:08:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:08:10.173 * Looking for test storage... 
00:08:10.173 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:10.173 01:08:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:10.173 01:08:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:08:10.173 01:08:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:10.173 01:08:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:10.173 01:08:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:10.173 01:08:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:10.173 01:08:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:10.173 01:08:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:10.173 01:08:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:10.173 01:08:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:10.173 01:08:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:10.173 01:08:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:10.173 01:08:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:10.173 01:08:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:10.173 01:08:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:10.173 01:08:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:10.173 01:08:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:10.173 01:08:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:10.173 01:08:32 
nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:10.173 01:08:32 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:10.173 01:08:32 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:10.173 01:08:32 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:10.173 01:08:32 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.173 01:08:32 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.173 01:08:32 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.173 01:08:32 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:08:10.173 01:08:32 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.173 01:08:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:08:10.173 01:08:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:10.173 01:08:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:10.173 01:08:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:10.173 01:08:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:10.173 01:08:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:10.173 01:08:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:10.173 01:08:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:10.173 01:08:32 
nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:10.173 01:08:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:08:10.173 01:08:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:08:10.173 01:08:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:08:10.173 01:08:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:08:10.173 01:08:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:08:10.173 01:08:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:08:10.173 01:08:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:08:10.173 01:08:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:08:10.173 01:08:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:10.173 01:08:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:10.173 01:08:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:08:10.173 01:08:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:10.173 01:08:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:10.173 01:08:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:10.173 01:08:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:10.173 01:08:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:10.173 01:08:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:10.173 01:08:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:10.173 01:08:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:10.173 
01:08:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:10.173 01:08:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:10.173 01:08:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:08:10.173 01:08:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:15.466 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example 
-- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:15.466 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:15.466 Found net devices under 0000:86:00.0: cvl_0_0 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:15.466 Found net devices under 0000:86:00.1: cvl_0_1 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:15.466 01:08:37 
nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:15.466 01:08:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:15.727 01:08:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:15.727 01:08:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:15.727 01:08:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:15.727 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:15.727 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:08:15.727 00:08:15.727 --- 10.0.0.2 ping statistics --- 00:08:15.727 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.727 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:08:15.727 01:08:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:15.727 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:15.727 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.374 ms 00:08:15.727 00:08:15.727 --- 10.0.0.1 ping statistics --- 00:08:15.727 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.727 rtt min/avg/max/mdev = 0.374/0.374/0.374/0.000 ms 00:08:15.727 01:08:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:15.727 01:08:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:08:15.727 01:08:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:15.727 01:08:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:15.727 01:08:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:15.727 01:08:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:15.727 01:08:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:15.727 01:08:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:15.727 01:08:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:15.727 01:08:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:08:15.727 01:08:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:08:15.727 01:08:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:15.727 01:08:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:15.727 01:08:38 
nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:08:15.727 01:08:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:08:15.727 01:08:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:08:15.727 01:08:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=741821 00:08:15.727 01:08:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:15.727 01:08:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 741821 00:08:15.727 01:08:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 741821 ']' 00:08:15.727 01:08:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.727 01:08:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:15.727 01:08:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:15.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:15.727 01:08:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:15.727 01:08:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:15.727 EAL: No free 2048 kB hugepages reported on node 1 00:08:16.669 01:08:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:16.669 01:08:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:08:16.669 01:08:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:08:16.669 01:08:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:16.669 01:08:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:16.669 01:08:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:16.669 01:08:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.669 01:08:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:16.669 01:08:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.669 01:08:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:08:16.669 01:08:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.669 01:08:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:16.669 01:08:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.669 01:08:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:08:16.669 01:08:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:16.669 01:08:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.669 01:08:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:16.669 01:08:39 
nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.669 01:08:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:08:16.669 01:08:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:16.669 01:08:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.669 01:08:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:16.669 01:08:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.669 01:08:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:16.669 01:08:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.669 01:08:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:16.669 01:08:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.669 01:08:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:08:16.669 01:08:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:16.669 EAL: No free 2048 kB hugepages reported on node 1 00:08:28.898 Initializing NVMe Controllers 00:08:28.898 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:28.898 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:28.898 Initialization complete. Launching workers. 
00:08:28.898 ======================================================== 00:08:28.898 Latency(us) 00:08:28.898 Device Information : IOPS MiB/s Average min max 00:08:28.898 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 13344.06 52.13 4795.74 565.05 16778.12 00:08:28.898 ======================================================== 00:08:28.898 Total : 13344.06 52.13 4795.74 565.05 16778.12 00:08:28.898 00:08:28.898 01:08:49 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:08:28.898 01:08:49 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:08:28.898 01:08:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:28.898 01:08:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:08:28.898 01:08:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:28.898 01:08:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:08:28.898 01:08:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:28.898 01:08:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:28.898 rmmod nvme_tcp 00:08:28.898 rmmod nvme_fabrics 00:08:28.898 rmmod nvme_keyring 00:08:28.898 01:08:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:28.898 01:08:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:08:28.898 01:08:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:08:28.898 01:08:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 741821 ']' 00:08:28.898 01:08:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 741821 00:08:28.898 01:08:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 741821 ']' 00:08:28.898 01:08:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 741821 00:08:28.898 01:08:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:08:28.898 01:08:49 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:28.898 01:08:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 741821 00:08:28.898 01:08:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:08:28.898 01:08:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:08:28.898 01:08:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 741821' 00:08:28.898 killing process with pid 741821 00:08:28.898 01:08:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 741821 00:08:28.898 01:08:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 741821 00:08:28.898 nvmf threads initialize successfully 00:08:28.898 bdev subsystem init successfully 00:08:28.898 created a nvmf target service 00:08:28.898 create targets's poll groups done 00:08:28.898 all subsystems of target started 00:08:28.898 nvmf target is running 00:08:28.898 all subsystems of target stopped 00:08:28.898 destroy targets's poll groups done 00:08:28.898 destroyed the nvmf target service 00:08:28.898 bdev subsystem finish successfully 00:08:28.898 nvmf threads destroy successfully 00:08:28.898 01:08:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:28.898 01:08:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:28.898 01:08:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:28.898 01:08:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:28.898 01:08:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:28.898 01:08:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:28.898 01:08:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:28.898 01:08:49 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.159 01:08:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:29.159 01:08:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:08:29.159 01:08:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:29.159 01:08:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:29.159 00:08:29.159 real 0m19.076s 00:08:29.159 user 0m45.719s 00:08:29.159 sys 0m5.332s 00:08:29.159 01:08:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:29.159 01:08:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:29.159 ************************************ 00:08:29.159 END TEST nvmf_example 00:08:29.159 ************************************ 00:08:29.159 01:08:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:29.159 01:08:51 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:29.159 01:08:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:29.159 01:08:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:29.159 01:08:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:29.422 ************************************ 00:08:29.422 START TEST nvmf_filesystem 00:08:29.422 ************************************ 00:08:29.422 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:29.422 * Looking for test storage... 
00:08:29.422 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:29.422 01:08:51 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:08:29.422 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:29.422 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:08:29.422 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:29.422 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:29.422 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:08:29.422 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:08:29.422 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:29.422 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:08:29.422 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:29.422 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:29.422 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:29.422 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:29.422 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:29.422 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:29.422 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:29.422 01:08:51 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:29.422 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:29.422 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:29.422 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:29.422 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:29.422 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:29.422 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:29.422 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:29.422 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:29.422 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:29.422 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 
00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 
00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:08:29.423 01:08:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:29.423 #define SPDK_CONFIG_H 00:08:29.423 
#define SPDK_CONFIG_APPS 1 00:08:29.423 #define SPDK_CONFIG_ARCH native 00:08:29.423 #undef SPDK_CONFIG_ASAN 00:08:29.423 #undef SPDK_CONFIG_AVAHI 00:08:29.423 #undef SPDK_CONFIG_CET 00:08:29.423 #define SPDK_CONFIG_COVERAGE 1 00:08:29.423 #define SPDK_CONFIG_CROSS_PREFIX 00:08:29.423 #undef SPDK_CONFIG_CRYPTO 00:08:29.423 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:29.423 #undef SPDK_CONFIG_CUSTOMOCF 00:08:29.423 #undef SPDK_CONFIG_DAOS 00:08:29.423 #define SPDK_CONFIG_DAOS_DIR 00:08:29.423 #define SPDK_CONFIG_DEBUG 1 00:08:29.423 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:29.423 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:08:29.423 #define SPDK_CONFIG_DPDK_INC_DIR 00:08:29.423 #define SPDK_CONFIG_DPDK_LIB_DIR 00:08:29.423 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:29.423 #undef SPDK_CONFIG_DPDK_UADK 00:08:29.423 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:29.423 #define SPDK_CONFIG_EXAMPLES 1 00:08:29.423 #undef SPDK_CONFIG_FC 00:08:29.423 #define SPDK_CONFIG_FC_PATH 00:08:29.423 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:29.423 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:29.423 #undef SPDK_CONFIG_FUSE 00:08:29.423 #undef SPDK_CONFIG_FUZZER 00:08:29.423 #define SPDK_CONFIG_FUZZER_LIB 00:08:29.423 #undef SPDK_CONFIG_GOLANG 00:08:29.423 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:29.423 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:08:29.423 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:29.423 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:08:29.423 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:29.423 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:29.423 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:29.423 #define SPDK_CONFIG_IDXD 1 00:08:29.423 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:29.423 #undef SPDK_CONFIG_IPSEC_MB 00:08:29.423 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:29.423 #define SPDK_CONFIG_ISAL 1 00:08:29.423 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:29.423 #define SPDK_CONFIG_ISCSI_INITIATOR 1 
00:08:29.423 #define SPDK_CONFIG_LIBDIR 00:08:29.423 #undef SPDK_CONFIG_LTO 00:08:29.423 #define SPDK_CONFIG_MAX_LCORES 128 00:08:29.423 #define SPDK_CONFIG_NVME_CUSE 1 00:08:29.423 #undef SPDK_CONFIG_OCF 00:08:29.423 #define SPDK_CONFIG_OCF_PATH 00:08:29.423 #define SPDK_CONFIG_OPENSSL_PATH 00:08:29.423 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:29.423 #define SPDK_CONFIG_PGO_DIR 00:08:29.423 #undef SPDK_CONFIG_PGO_USE 00:08:29.423 #define SPDK_CONFIG_PREFIX /usr/local 00:08:29.423 #undef SPDK_CONFIG_RAID5F 00:08:29.423 #undef SPDK_CONFIG_RBD 00:08:29.423 #define SPDK_CONFIG_RDMA 1 00:08:29.424 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:29.424 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:29.424 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:29.424 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:29.424 #define SPDK_CONFIG_SHARED 1 00:08:29.424 #undef SPDK_CONFIG_SMA 00:08:29.424 #define SPDK_CONFIG_TESTS 1 00:08:29.424 #undef SPDK_CONFIG_TSAN 00:08:29.424 #define SPDK_CONFIG_UBLK 1 00:08:29.424 #define SPDK_CONFIG_UBSAN 1 00:08:29.424 #undef SPDK_CONFIG_UNIT_TESTS 00:08:29.424 #undef SPDK_CONFIG_URING 00:08:29.424 #define SPDK_CONFIG_URING_PATH 00:08:29.424 #undef SPDK_CONFIG_URING_ZNS 00:08:29.424 #undef SPDK_CONFIG_USDT 00:08:29.424 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:29.424 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:29.424 #define SPDK_CONFIG_VFIO_USER 1 00:08:29.424 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:29.424 #define SPDK_CONFIG_VHOST 1 00:08:29.424 #define SPDK_CONFIG_VIRTIO 1 00:08:29.424 #undef SPDK_CONFIG_VTUNE 00:08:29.424 #define SPDK_CONFIG_VTUNE_DIR 00:08:29.424 #define SPDK_CONFIG_WERROR 1 00:08:29.424 #define SPDK_CONFIG_WPDK_DIR 00:08:29.424 #undef SPDK_CONFIG_XNVME 00:08:29.424 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 
00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:08:29.424 01:08:51 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:08:29.424 01:08:51 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:08:29.424 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:08:29.425 01:08:51 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:08:29.425 01:08:51 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:08:29.425 01:08:51 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export 
DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export 
LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:29.425 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:29.426 01:08:51 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j96 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:08:29.426 01:08:51 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 744234 ]] 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 744234 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.KZ1XP3 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 
00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.KZ1XP3/tests/target /tmp/spdk.KZ1XP3 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=950202368 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4334227456 00:08:29.426 01:08:51 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=185215864832 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=195974283264 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=10758418432 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=97931505664 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=97987141632 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=55635968 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=39185477632 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=39194857472 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # 
uses["$mount"]=9379840 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=97984466944 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=97987141632 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=2674688 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=19597422592 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=19597426688 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:08:29.426 * Looking for test storage... 
00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=185215864832 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:08:29.426 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:08:29.427 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=12973010944 00:08:29.427 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:29.427 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:29.427 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:29.427 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:29.427 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:29.427 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:08:29.427 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:08:29.427 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:08:29.427 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:29.427 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:29.427 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:08:29.427 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:08:29.427 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:29.427 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:29.427 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:08:29.427 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:08:29.427 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:29.427 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:08:29.427 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:29.427 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:08:29.427 01:08:51 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:29.427 01:08:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:08:29.427 01:08:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:29.427 01:08:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:29.427 01:08:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:29.427 01:08:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:29.427 01:08:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:29.427 01:08:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:29.427 01:08:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:29.427 01:08:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:29.427 01:08:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:29.427 01:08:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:29.427 01:08:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:29.427 01:08:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:29.427 01:08:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:29.427 01:08:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:29.427 01:08:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:08:29.427 01:08:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:29.427 01:08:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:29.427 01:08:51 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:29.427 01:08:51 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:29.427 01:08:51 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:29.427 01:08:51 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.427 01:08:51 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.427 01:08:51 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.427 01:08:51 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:08:29.427 01:08:51 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.427 01:08:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:08:29.427 01:08:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:29.427 01:08:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:29.427 01:08:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:29.427 01:08:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:29.427 01:08:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:29.427 01:08:51 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:29.427 01:08:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:29.427 01:08:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:29.427 01:08:51 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:08:29.427 01:08:51 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:08:29.427 01:08:51 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:08:29.427 01:08:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:29.427 01:08:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:29.427 01:08:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:29.427 01:08:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:29.427 01:08:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:29.427 01:08:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:29.427 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:29.427 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.428 01:08:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:29.428 01:08:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:29.428 01:08:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:08:29.428 01:08:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:34.763 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:34.763 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:08:34.763 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 
00:08:34.763 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:34.763 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:34.763 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:34.763 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:34.763 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:08:34.763 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:34.763 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:08:34.763 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:08:34.763 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:08:34.763 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:08:34.763 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:08:34.763 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:08:34.763 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:34.763 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:34.763 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:34.763 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:34.763 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:34.763 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:34.763 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:34.763 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:08:34.763 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:34.763 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:34.763 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:34.763 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:34.763 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:34.763 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:34.763 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:34.763 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:34.763 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:34.763 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:34.763 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:34.763 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:34.763 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:34.763 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:34.763 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:34.763 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:34.763 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:34.763 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:34.763 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:34.763 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:34.763 01:08:56 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:34.763 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:34.764 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:34.764 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:34.764 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:34.764 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:34.764 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:34.764 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:34.764 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:34.764 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:34.764 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:34.764 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:34.764 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:34.764 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:34.764 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:34.764 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:34.764 Found net devices under 0000:86:00.0: cvl_0_0 00:08:34.764 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:34.764 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:34.764 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:34.764 01:08:56 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:34.764 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:34.764 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:34.764 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:34.764 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:34.764 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:34.764 Found net devices under 0000:86:00.1: cvl_0_1 00:08:34.764 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:34.764 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:34.764 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:08:34.764 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:34.764 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:34.764 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:34.764 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:34.764 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:34.764 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:34.764 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:34.764 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:34.764 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:34.764 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:34.764 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:08:34.764 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:34.764 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:34.764 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:34.764 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:34.764 01:08:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:34.764 01:08:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:34.764 01:08:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:34.764 01:08:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:34.764 01:08:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:34.764 01:08:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:34.764 01:08:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:34.764 01:08:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:34.764 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:34.764 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.634 ms 00:08:34.764 00:08:34.764 --- 10.0.0.2 ping statistics --- 00:08:34.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.764 rtt min/avg/max/mdev = 0.634/0.634/0.634/0.000 ms 00:08:34.764 01:08:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:34.764 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:34.764 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.378 ms 00:08:34.764 00:08:34.764 --- 10.0.0.1 ping statistics --- 00:08:34.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.764 rtt min/avg/max/mdev = 0.378/0.378/0.378/0.000 ms 00:08:34.764 01:08:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:34.764 01:08:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:08:34.764 01:08:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:34.764 01:08:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:34.764 01:08:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:34.764 01:08:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:34.764 01:08:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:34.764 01:08:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:34.764 01:08:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:34.764 01:08:57 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:08:34.764 01:08:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:34.764 01:08:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:34.764 01:08:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:34.764 ************************************ 00:08:34.764 START TEST nvmf_filesystem_no_in_capsule 00:08:34.764 ************************************ 00:08:34.764 01:08:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:08:34.764 01:08:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # 
in_capsule=0 00:08:34.764 01:08:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:34.764 01:08:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:34.764 01:08:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:34.764 01:08:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:34.764 01:08:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=747258 00:08:34.764 01:08:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 747258 00:08:34.765 01:08:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 747258 ']' 00:08:34.765 01:08:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:34.765 01:08:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:34.765 01:08:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:34.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:34.765 01:08:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:34.765 01:08:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:34.765 01:08:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:34.765 [2024-07-25 01:08:57.241514] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:08:34.765 [2024-07-25 01:08:57.241554] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:35.025 EAL: No free 2048 kB hugepages reported on node 1 00:08:35.025 [2024-07-25 01:08:57.297278] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:35.025 [2024-07-25 01:08:57.380322] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:35.025 [2024-07-25 01:08:57.380357] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:35.025 [2024-07-25 01:08:57.380364] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:35.025 [2024-07-25 01:08:57.380370] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:35.025 [2024-07-25 01:08:57.380375] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:35.025 [2024-07-25 01:08:57.380648] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:35.025 [2024-07-25 01:08:57.380665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:35.025 [2024-07-25 01:08:57.380751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:35.025 [2024-07-25 01:08:57.380753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.596 01:08:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:35.596 01:08:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:08:35.596 01:08:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:35.596 01:08:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:35.596 01:08:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:35.856 01:08:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:35.856 01:08:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:35.856 01:08:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:35.856 01:08:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.856 01:08:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:35.856 [2024-07-25 01:08:58.099026] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:35.856 01:08:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.856 01:08:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:35.856 01:08:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.856 01:08:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:35.856 Malloc1 00:08:35.856 01:08:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.856 01:08:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:35.856 01:08:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.856 01:08:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:35.856 01:08:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.856 01:08:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:35.856 01:08:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.856 01:08:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:35.856 01:08:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.856 01:08:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:35.856 01:08:58 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.856 01:08:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:35.856 [2024-07-25 01:08:58.246805] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:35.856 01:08:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.856 01:08:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:35.856 01:08:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:08:35.856 01:08:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:08:35.856 01:08:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:08:35.856 01:08:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:08:35.856 01:08:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:35.856 01:08:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.856 01:08:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:35.856 01:08:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.856 01:08:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:08:35.856 { 00:08:35.856 "name": "Malloc1", 00:08:35.856 "aliases": [ 00:08:35.856 "724eb3f9-ebe0-4b9c-8ff1-3fe413496ac7" 00:08:35.856 ], 00:08:35.856 "product_name": "Malloc disk", 
00:08:35.856 "block_size": 512, 00:08:35.856 "num_blocks": 1048576, 00:08:35.856 "uuid": "724eb3f9-ebe0-4b9c-8ff1-3fe413496ac7", 00:08:35.856 "assigned_rate_limits": { 00:08:35.856 "rw_ios_per_sec": 0, 00:08:35.857 "rw_mbytes_per_sec": 0, 00:08:35.857 "r_mbytes_per_sec": 0, 00:08:35.857 "w_mbytes_per_sec": 0 00:08:35.857 }, 00:08:35.857 "claimed": true, 00:08:35.857 "claim_type": "exclusive_write", 00:08:35.857 "zoned": false, 00:08:35.857 "supported_io_types": { 00:08:35.857 "read": true, 00:08:35.857 "write": true, 00:08:35.857 "unmap": true, 00:08:35.857 "flush": true, 00:08:35.857 "reset": true, 00:08:35.857 "nvme_admin": false, 00:08:35.857 "nvme_io": false, 00:08:35.857 "nvme_io_md": false, 00:08:35.857 "write_zeroes": true, 00:08:35.857 "zcopy": true, 00:08:35.857 "get_zone_info": false, 00:08:35.857 "zone_management": false, 00:08:35.857 "zone_append": false, 00:08:35.857 "compare": false, 00:08:35.857 "compare_and_write": false, 00:08:35.857 "abort": true, 00:08:35.857 "seek_hole": false, 00:08:35.857 "seek_data": false, 00:08:35.857 "copy": true, 00:08:35.857 "nvme_iov_md": false 00:08:35.857 }, 00:08:35.857 "memory_domains": [ 00:08:35.857 { 00:08:35.857 "dma_device_id": "system", 00:08:35.857 "dma_device_type": 1 00:08:35.857 }, 00:08:35.857 { 00:08:35.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.857 "dma_device_type": 2 00:08:35.857 } 00:08:35.857 ], 00:08:35.857 "driver_specific": {} 00:08:35.857 } 00:08:35.857 ]' 00:08:35.857 01:08:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:08:35.857 01:08:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:08:35.857 01:08:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:08:36.117 01:08:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:08:36.117 
01:08:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:08:36.117 01:08:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:08:36.117 01:08:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:36.117 01:08:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:37.056 01:08:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:37.056 01:08:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:08:37.056 01:08:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:37.056 01:08:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:37.056 01:08:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:08:39.596 01:09:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:39.596 01:09:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:39.596 01:09:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:39.596 01:09:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:39.596 01:09:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:39.596 01:09:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:08:39.596 01:09:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:39.596 01:09:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:39.596 01:09:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:39.596 01:09:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:39.596 01:09:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:39.596 01:09:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:39.596 01:09:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:39.596 01:09:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:39.596 01:09:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:39.596 01:09:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:39.596 01:09:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:39.596 01:09:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:39.596 01:09:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:40.979 01:09:03 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:40.979 01:09:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:40.979 01:09:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:40.979 01:09:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:40.979 01:09:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:40.979 ************************************ 00:08:40.979 START TEST filesystem_ext4 00:08:40.979 ************************************ 00:08:40.979 01:09:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:40.979 01:09:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:40.979 01:09:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:40.979 01:09:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:40.979 01:09:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:08:40.979 01:09:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:40.979 01:09:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:08:40.979 01:09:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 
00:08:40.979 01:09:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:08:40.979 01:09:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:08:40.979 01:09:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:40.979 mke2fs 1.46.5 (30-Dec-2021) 00:08:40.979 Discarding device blocks: 0/522240 done 00:08:40.979 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:40.979 Filesystem UUID: 56b3a95e-2d24-4cb1-9db1-9e1f489c8cbc 00:08:40.979 Superblock backups stored on blocks: 00:08:40.979 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:40.979 00:08:40.979 Allocating group tables: 0/64 done 00:08:40.979 Writing inode tables: 0/64 done 00:08:41.919 Creating journal (8192 blocks): done 00:08:41.919 Writing superblocks and filesystem accounting information: 0/64 done 00:08:41.919 00:08:41.919 01:09:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:08:41.919 01:09:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:41.919 01:09:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:41.919 01:09:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:08:41.919 01:09:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:41.919 01:09:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:08:41.919 01:09:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- 
target/filesystem.sh@29 -- # i=0 00:08:41.919 01:09:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:41.919 01:09:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 747258 00:08:41.919 01:09:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:41.919 01:09:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:41.919 01:09:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:41.919 01:09:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:41.919 00:08:41.919 real 0m1.282s 00:08:41.919 user 0m0.017s 00:08:41.919 sys 0m0.050s 00:08:41.919 01:09:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:41.919 01:09:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:41.919 ************************************ 00:08:41.919 END TEST filesystem_ext4 00:08:41.919 ************************************ 00:08:42.180 01:09:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:42.180 01:09:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:42.180 01:09:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:42.180 01:09:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:42.180 01:09:04 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:42.180 ************************************ 00:08:42.180 START TEST filesystem_btrfs 00:08:42.180 ************************************ 00:08:42.180 01:09:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:42.180 01:09:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:42.180 01:09:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:42.180 01:09:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:42.180 01:09:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:08:42.180 01:09:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:42.180 01:09:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:08:42.180 01:09:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:08:42.180 01:09:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:08:42.180 01:09:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:08:42.180 01:09:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:42.440 btrfs-progs v6.6.2 00:08:42.440 See https://btrfs.readthedocs.io for more 
information. 00:08:42.440 00:08:42.440 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:08:42.440 NOTE: several default settings have changed in version 5.15, please make sure 00:08:42.440 this does not affect your deployments: 00:08:42.440 - DUP for metadata (-m dup) 00:08:42.440 - enabled no-holes (-O no-holes) 00:08:42.440 - enabled free-space-tree (-R free-space-tree) 00:08:42.440 00:08:42.440 Label: (null) 00:08:42.440 UUID: 09c25f59-3275-4fee-bc42-576d1a78a689 00:08:42.440 Node size: 16384 00:08:42.440 Sector size: 4096 00:08:42.440 Filesystem size: 510.00MiB 00:08:42.440 Block group profiles: 00:08:42.440 Data: single 8.00MiB 00:08:42.440 Metadata: DUP 32.00MiB 00:08:42.440 System: DUP 8.00MiB 00:08:42.440 SSD detected: yes 00:08:42.440 Zoned device: no 00:08:42.440 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:42.440 Runtime features: free-space-tree 00:08:42.440 Checksum: crc32c 00:08:42.440 Number of devices: 1 00:08:42.440 Devices: 00:08:42.440 ID SIZE PATH 00:08:42.440 1 510.00MiB /dev/nvme0n1p1 00:08:42.440 00:08:42.440 01:09:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:08:42.440 01:09:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:42.700 01:09:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:42.700 01:09:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:08:42.700 01:09:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:42.700 01:09:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:08:42.700 01:09:05 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:42.700 01:09:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:42.700 01:09:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 747258 00:08:42.700 01:09:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:42.701 01:09:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:42.701 01:09:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:42.701 01:09:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:42.701 00:08:42.701 real 0m0.646s 00:08:42.701 user 0m0.017s 00:08:42.701 sys 0m0.063s 00:08:42.701 01:09:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:42.701 01:09:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:42.701 ************************************ 00:08:42.701 END TEST filesystem_btrfs 00:08:42.701 ************************************ 00:08:42.701 01:09:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:42.701 01:09:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:42.701 01:09:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:42.701 01:09:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1105 -- # xtrace_disable
00:08:42.701 01:09:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:08:42.701 ************************************
00:08:42.701 START TEST filesystem_xfs
00:08:42.701 ************************************
00:08:42.701 01:09:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1
00:08:42.701 01:09:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs
00:08:42.701 01:09:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:08:42.701 01:09:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1
00:08:42.701 01:09:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs
00:08:42.701 01:09:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1
00:08:42.701 01:09:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0
00:08:42.701 01:09:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force
00:08:42.701 01:09:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']'
00:08:42.960 01:09:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f
00:08:42.960 01:09:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1
00:08:42.960 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks
00:08:42.960 = sectsz=512 attr=2, projid32bit=1
00:08:42.960 = crc=1 finobt=1, sparse=1, rmapbt=0
00:08:42.960 = reflink=1 bigtime=1 inobtcount=1 nrext64=0
00:08:42.960 data = bsize=4096 blocks=130560, imaxpct=25
00:08:42.960 = sunit=0 swidth=0 blks
00:08:42.960 naming =version 2 bsize=4096 ascii-ci=0, ftype=1
00:08:42.960 log =internal log bsize=4096 blocks=16384, version=2
00:08:42.960 = sectsz=512 sunit=0 blks, lazy-count=1
00:08:42.960 realtime =none extsz=4096 blocks=0, rtextents=0
00:08:43.899 Discarding blocks...Done.
00:08:43.899 01:09:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0
00:08:43.900 01:09:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:08:46.440 01:09:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:08:46.440 01:09:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync
00:08:46.441 01:09:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:08:46.441 01:09:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync
00:08:46.441 01:09:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0
00:08:46.441 01:09:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:08:46.441 01:09:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 747258
00:08:46.441 01:09:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:08:46.441 01:09:08
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:46.441 01:09:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:46.441 01:09:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:46.441 00:08:46.441 real 0m3.486s 00:08:46.441 user 0m0.024s 00:08:46.441 sys 0m0.050s 00:08:46.441 01:09:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:46.441 01:09:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:46.441 ************************************ 00:08:46.441 END TEST filesystem_xfs 00:08:46.441 ************************************ 00:08:46.441 01:09:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:46.441 01:09:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:46.700 01:09:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:46.700 01:09:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:46.700 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:46.700 01:09:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:46.700 01:09:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:08:46.700 01:09:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:46.700 01:09:09 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:46.700 01:09:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:46.700 01:09:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:46.700 01:09:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:08:46.700 01:09:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:46.700 01:09:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.700 01:09:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:46.700 01:09:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.700 01:09:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:46.700 01:09:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 747258 00:08:46.700 01:09:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 747258 ']' 00:08:46.700 01:09:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 747258 00:08:46.700 01:09:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:08:46.700 01:09:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:46.700 01:09:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps 
--no-headers -o comm= 747258 00:08:46.701 01:09:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:46.701 01:09:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:46.701 01:09:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 747258' 00:08:46.701 killing process with pid 747258 00:08:46.701 01:09:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 747258 00:08:46.701 01:09:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 747258 00:08:47.269 01:09:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:47.269 00:08:47.269 real 0m12.273s 00:08:47.269 user 0m48.160s 00:08:47.269 sys 0m1.115s 00:08:47.269 01:09:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:47.269 01:09:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:47.269 ************************************ 00:08:47.269 END TEST nvmf_filesystem_no_in_capsule 00:08:47.269 ************************************ 00:08:47.269 01:09:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:08:47.269 01:09:09 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:47.269 01:09:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:47.269 01:09:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:47.269 01:09:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:47.269 ************************************ 00:08:47.269 START TEST 
nvmf_filesystem_in_capsule 00:08:47.269 ************************************ 00:08:47.269 01:09:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:08:47.269 01:09:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:47.270 01:09:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:47.270 01:09:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:47.270 01:09:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:47.270 01:09:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:47.270 01:09:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=749976 00:08:47.270 01:09:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 749976 00:08:47.270 01:09:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:47.270 01:09:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 749976 ']' 00:08:47.270 01:09:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:47.270 01:09:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:47.270 01:09:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:47.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:47.270 01:09:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:47.270 01:09:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:47.270 [2024-07-25 01:09:09.590960] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:08:47.270 [2024-07-25 01:09:09.591003] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:47.270 EAL: No free 2048 kB hugepages reported on node 1 00:08:47.270 [2024-07-25 01:09:09.649196] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:47.270 [2024-07-25 01:09:09.722809] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:47.270 [2024-07-25 01:09:09.722852] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:47.270 [2024-07-25 01:09:09.722860] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:47.270 [2024-07-25 01:09:09.722867] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:47.270 [2024-07-25 01:09:09.722873] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
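The waitforlisten/waitforserial helpers traced in this log poll a probe inside a bounded retry loop (the waitforserial trace shows `(( i++ <= 15 ))` with a `sleep 2` between attempts, checking `lsblk` for the serial). A device-free sketch of that retry pattern, using a plain file probe so it runs anywhere; the `wait_for` name and the short delay are placeholders, not the harness's actual helper:

```shell
#!/usr/bin/env bash
# Retry a probe command until it succeeds or the retry budget runs out,
# mirroring the polling shape of waitforlisten/waitforserial.
wait_for() {
  local i=0
  while (( i++ <= 15 )); do   # same bound the waitforserial trace shows
    "$@" && return 0          # probe succeeded
    sleep 0.1                 # the real helper sleeps 2s between tries
  done
  return 1                    # budget exhausted
}

marker=$(mktemp)              # a file that already exists: probe succeeds at once
wait_for test -e "$marker" && echo ready
```

The same function works for any probe, e.g. `wait_for grep -qw nvme0n1 /tmp/lsblk.out`, which is how the harness-style checks compose.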
00:08:47.270 [2024-07-25 01:09:09.722937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:47.270 [2024-07-25 01:09:09.723032] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:47.270 [2024-07-25 01:09:09.723122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:47.270 [2024-07-25 01:09:09.723124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.213 01:09:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:48.213 01:09:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:08:48.213 01:09:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:48.213 01:09:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:48.213 01:09:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:48.213 01:09:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:48.213 01:09:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:48.213 01:09:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:48.213 01:09:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.213 01:09:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:48.213 [2024-07-25 01:09:10.443964] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:48.213 01:09:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
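The `rpc_cmd` calls traced in this test drive SPDK's JSON-RPC interface; outside the harness the same target-side bring-up can be issued with `scripts/rpc.py` against a running `nvmf_tgt`. A sketch only, not part of the test script: the transport parameters, bdev size, NQN, serial, and the 10.0.0.2/4420 listener are taken from this log, and a live target on the default `/var/tmp/spdk.sock` is assumed.

```shell
# Target-side bring-up mirroring the rpc_cmd sequence in target/filesystem.sh.
# Requires a running nvmf_tgt; this is an operational sketch, not runnable standalone.
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096   # -c 4096: in-capsule data size
scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1             # 512 MiB malloc bdev, 512 B blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```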
00:08:48.213 01:09:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:48.213 01:09:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.213 01:09:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:48.213 Malloc1 00:08:48.213 01:09:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.213 01:09:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:48.213 01:09:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.213 01:09:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:48.213 01:09:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.213 01:09:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:48.213 01:09:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.213 01:09:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:48.213 01:09:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.213 01:09:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:48.213 01:09:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.213 01:09:10 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:48.213 [2024-07-25 01:09:10.589235] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:48.213 01:09:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.213 01:09:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:48.213 01:09:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:08:48.213 01:09:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:08:48.213 01:09:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:08:48.213 01:09:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:08:48.213 01:09:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:48.213 01:09:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.213 01:09:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:48.213 01:09:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.213 01:09:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:08:48.213 { 00:08:48.213 "name": "Malloc1", 00:08:48.213 "aliases": [ 00:08:48.213 "67191ba3-5105-4121-8559-1af8e3cddb77" 00:08:48.213 ], 00:08:48.213 "product_name": "Malloc disk", 00:08:48.213 "block_size": 512, 00:08:48.214 "num_blocks": 1048576, 00:08:48.214 "uuid": "67191ba3-5105-4121-8559-1af8e3cddb77", 00:08:48.214 "assigned_rate_limits": { 
00:08:48.214 "rw_ios_per_sec": 0, 00:08:48.214 "rw_mbytes_per_sec": 0, 00:08:48.214 "r_mbytes_per_sec": 0, 00:08:48.214 "w_mbytes_per_sec": 0 00:08:48.214 }, 00:08:48.214 "claimed": true, 00:08:48.214 "claim_type": "exclusive_write", 00:08:48.214 "zoned": false, 00:08:48.214 "supported_io_types": { 00:08:48.214 "read": true, 00:08:48.214 "write": true, 00:08:48.214 "unmap": true, 00:08:48.214 "flush": true, 00:08:48.214 "reset": true, 00:08:48.214 "nvme_admin": false, 00:08:48.214 "nvme_io": false, 00:08:48.214 "nvme_io_md": false, 00:08:48.214 "write_zeroes": true, 00:08:48.214 "zcopy": true, 00:08:48.214 "get_zone_info": false, 00:08:48.214 "zone_management": false, 00:08:48.214 "zone_append": false, 00:08:48.214 "compare": false, 00:08:48.214 "compare_and_write": false, 00:08:48.214 "abort": true, 00:08:48.214 "seek_hole": false, 00:08:48.214 "seek_data": false, 00:08:48.214 "copy": true, 00:08:48.214 "nvme_iov_md": false 00:08:48.214 }, 00:08:48.214 "memory_domains": [ 00:08:48.214 { 00:08:48.214 "dma_device_id": "system", 00:08:48.214 "dma_device_type": 1 00:08:48.214 }, 00:08:48.214 { 00:08:48.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.214 "dma_device_type": 2 00:08:48.214 } 00:08:48.214 ], 00:08:48.214 "driver_specific": {} 00:08:48.214 } 00:08:48.214 ]' 00:08:48.214 01:09:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:08:48.214 01:09:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:08:48.214 01:09:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:08:48.214 01:09:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:08:48.214 01:09:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:08:48.214 01:09:10 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:08:48.474 01:09:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:48.474 01:09:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:49.412 01:09:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:49.412 01:09:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:08:49.412 01:09:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:49.412 01:09:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:49.412 01:09:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:08:51.360 01:09:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:51.360 01:09:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:51.360 01:09:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:51.360 01:09:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:51.360 01:09:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:51.360 01:09:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # 
return 0 00:08:51.360 01:09:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:51.360 01:09:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:51.360 01:09:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:51.360 01:09:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:51.360 01:09:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:51.360 01:09:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:51.360 01:09:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:51.360 01:09:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:51.360 01:09:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:51.360 01:09:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:51.360 01:09:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:51.929 01:09:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:52.869 01:09:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:53.808 01:09:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:53.808 01:09:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 
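`target/filesystem.sh@63` above maps the subsystem serial back to the kernel block-device name with a PCRE lookahead over `lsblk -l -o NAME,SERIAL`. A sketch of the same extraction with the `lsblk` output simulated so it runs without an NVMe device; GNU grep with `-P` support is assumed, and the sample table (including the `sda` serial) is made up:

```shell
# Same pattern as the traced grep: capture the word immediately preceding
# whitespace + the target serial, fed canned lsblk-style output.
printf 'NAME    SERIAL\nnvme0n1 SPDKISFASTANDAWESOME\nsda     S1234NX0J123456\n' |
  grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)'
```

The lookahead `(?=\s+SPDKISFASTANDAWESOME)` keeps the serial out of the match, so only the device name column is printed.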
00:08:53.808 01:09:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:53.808 01:09:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:53.808 01:09:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:53.808 ************************************ 00:08:53.808 START TEST filesystem_in_capsule_ext4 00:08:53.808 ************************************ 00:08:53.808 01:09:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:53.808 01:09:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:53.808 01:09:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:53.808 01:09:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:53.808 01:09:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:08:53.808 01:09:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:53.808 01:09:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:08:53.808 01:09:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:08:53.808 01:09:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:08:53.808 01:09:16 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F
00:08:53.808 01:09:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1
00:08:53.808 mke2fs 1.46.5 (30-Dec-2021)
00:08:53.808 Discarding device blocks: 0/522240 done
00:08:53.808 Creating filesystem with 522240 1k blocks and 130560 inodes
00:08:53.808 Filesystem UUID: 721e9595-cbc1-4d42-8168-91b139c8900e
00:08:53.808 Superblock backups stored on blocks:
00:08:53.808 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409
00:08:53.808
00:08:53.808 Allocating group tables: 0/64 done
00:08:53.808 Writing inode tables: 0/64 done
00:08:54.068 Creating journal (8192 blocks): done
00:08:54.068 Writing superblocks and filesystem accounting information: 0/64 done
00:08:54.068
00:08:54.068 01:09:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0
00:08:54.068 01:09:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:08:54.328 01:09:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:08:54.328 01:09:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync
00:08:54.328 01:09:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:08:54.328 01:09:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync
00:08:54.328 01:09:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0
00:08:54.328 01:09:16
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:54.328 01:09:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 749976 00:08:54.328 01:09:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:54.328 01:09:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:54.328 01:09:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:54.328 01:09:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:54.328 00:08:54.328 real 0m0.688s 00:08:54.328 user 0m0.018s 00:08:54.328 sys 0m0.048s 00:08:54.328 01:09:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:54.328 01:09:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:54.328 ************************************ 00:08:54.328 END TEST filesystem_in_capsule_ext4 00:08:54.328 ************************************ 00:08:54.328 01:09:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:54.328 01:09:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:54.328 01:09:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:54.328 01:09:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:54.328 
01:09:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:54.328 ************************************ 00:08:54.328 START TEST filesystem_in_capsule_btrfs 00:08:54.328 ************************************ 00:08:54.328 01:09:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:54.328 01:09:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:54.328 01:09:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:54.328 01:09:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:54.328 01:09:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:08:54.328 01:09:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:54.328 01:09:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:08:54.328 01:09:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:08:54.328 01:09:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:08:54.328 01:09:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:08:54.328 01:09:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f 
/dev/nvme0n1p1 00:08:54.897 btrfs-progs v6.6.2 00:08:54.897 See https://btrfs.readthedocs.io for more information. 00:08:54.897 00:08:54.897 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:08:54.897 NOTE: several default settings have changed in version 5.15, please make sure 00:08:54.897 this does not affect your deployments: 00:08:54.897 - DUP for metadata (-m dup) 00:08:54.897 - enabled no-holes (-O no-holes) 00:08:54.897 - enabled free-space-tree (-R free-space-tree) 00:08:54.897 00:08:54.897 Label: (null) 00:08:54.897 UUID: 65894ce3-16c8-4cc0-9316-5e69cb636cc5 00:08:54.897 Node size: 16384 00:08:54.897 Sector size: 4096 00:08:54.897 Filesystem size: 510.00MiB 00:08:54.897 Block group profiles: 00:08:54.897 Data: single 8.00MiB 00:08:54.897 Metadata: DUP 32.00MiB 00:08:54.897 System: DUP 8.00MiB 00:08:54.897 SSD detected: yes 00:08:54.897 Zoned device: no 00:08:54.897 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:54.897 Runtime features: free-space-tree 00:08:54.897 Checksum: crc32c 00:08:54.897 Number of devices: 1 00:08:54.897 Devices: 00:08:54.897 ID SIZE PATH 00:08:54.897 1 510.00MiB /dev/nvme0n1p1 00:08:54.897 00:08:54.897 01:09:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:08:54.897 01:09:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:55.157 01:09:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:55.157 01:09:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:08:55.157 01:09:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:55.157 01:09:17 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:08:55.157 01:09:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:55.157 01:09:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:55.157 01:09:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 749976 00:08:55.157 01:09:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:55.157 01:09:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:55.157 01:09:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:55.157 01:09:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:55.157 00:08:55.157 real 0m0.715s 00:08:55.157 user 0m0.021s 00:08:55.157 sys 0m0.063s 00:08:55.157 01:09:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:55.157 01:09:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:55.157 ************************************ 00:08:55.157 END TEST filesystem_in_capsule_btrfs 00:08:55.157 ************************************ 00:08:55.157 01:09:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:55.157 01:09:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create 
xfs nvme0n1 00:08:55.157 01:09:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:55.157 01:09:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:55.157 01:09:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:55.157 ************************************ 00:08:55.157 START TEST filesystem_in_capsule_xfs 00:08:55.157 ************************************ 00:08:55.157 01:09:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:08:55.157 01:09:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:55.157 01:09:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:55.157 01:09:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:55.157 01:09:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:08:55.157 01:09:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:55.157 01:09:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:08:55.157 01:09:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:08:55.157 01:09:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:08:55.157 01:09:17 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:08:55.158 01:09:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:55.418 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:55.418 = sectsz=512 attr=2, projid32bit=1 00:08:55.418 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:55.418 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:55.418 data = bsize=4096 blocks=130560, imaxpct=25 00:08:55.418 = sunit=0 swidth=0 blks 00:08:55.418 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:55.418 log =internal log bsize=4096 blocks=16384, version=2 00:08:55.418 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:55.418 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:55.987 Discarding blocks...Done. 00:08:55.987 01:09:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:08:55.987 01:09:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:58.558 01:09:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:58.558 01:09:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:08:58.558 01:09:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:58.558 01:09:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:08:58.558 01:09:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:08:58.558 01:09:20 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:58.558 01:09:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 749976 00:08:58.558 01:09:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:58.558 01:09:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:58.558 01:09:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:58.558 01:09:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:58.558 00:08:58.558 real 0m2.930s 00:08:58.558 user 0m0.025s 00:08:58.558 sys 0m0.047s 00:08:58.558 01:09:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:58.558 01:09:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:58.558 ************************************ 00:08:58.558 END TEST filesystem_in_capsule_xfs 00:08:58.558 ************************************ 00:08:58.558 01:09:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:58.558 01:09:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:58.558 01:09:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:58.558 01:09:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:58.558 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:08:58.558 01:09:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:58.558 01:09:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:08:58.558 01:09:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:58.558 01:09:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:58.558 01:09:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:58.558 01:09:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:58.558 01:09:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:08:58.558 01:09:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:58.558 01:09:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:58.558 01:09:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:58.558 01:09:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:58.558 01:09:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:58.558 01:09:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 749976 00:08:58.558 01:09:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 749976 ']' 00:08:58.558 01:09:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@952 -- # kill -0 749976 00:08:58.558 01:09:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:08:58.558 01:09:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:58.558 01:09:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 749976 00:08:58.558 01:09:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:58.558 01:09:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:58.558 01:09:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 749976' 00:08:58.558 killing process with pid 749976 00:08:58.558 01:09:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 749976 00:08:58.558 01:09:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 749976 00:08:58.819 01:09:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:58.819 00:08:58.819 real 0m11.603s 00:08:58.819 user 0m45.564s 00:08:58.819 sys 0m1.084s 00:08:58.819 01:09:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:58.819 01:09:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:58.819 ************************************ 00:08:58.819 END TEST nvmf_filesystem_in_capsule 00:08:58.819 ************************************ 00:08:58.819 01:09:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:08:58.819 01:09:21 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:08:58.819 01:09:21 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:58.819 01:09:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:08:58.819 01:09:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:58.819 01:09:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:08:58.819 01:09:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:58.819 01:09:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:58.819 rmmod nvme_tcp 00:08:58.819 rmmod nvme_fabrics 00:08:58.819 rmmod nvme_keyring 00:08:58.819 01:09:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:58.819 01:09:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:08:58.819 01:09:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:08:58.819 01:09:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:58.819 01:09:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:58.819 01:09:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:58.819 01:09:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:58.819 01:09:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:58.819 01:09:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:58.819 01:09:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:58.819 01:09:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:58.819 01:09:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.361 01:09:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:01.361 00:09:01.361 real 0m31.634s 00:09:01.361 user 1m35.246s 00:09:01.361 sys 0m6.404s 00:09:01.361 01:09:23 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:01.361 01:09:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:01.361 ************************************ 00:09:01.361 END TEST nvmf_filesystem 00:09:01.361 ************************************ 00:09:01.361 01:09:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:01.361 01:09:23 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:09:01.361 01:09:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:01.361 01:09:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:01.361 01:09:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:01.361 ************************************ 00:09:01.361 START TEST nvmf_target_discovery 00:09:01.361 ************************************ 00:09:01.361 01:09:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:09:01.361 * Looking for test storage... 
00:09:01.361 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:01.361 01:09:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:01.361 01:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:09:01.361 01:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:01.361 01:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:01.361 01:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:01.361 01:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:01.361 01:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:01.361 01:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:01.361 01:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:01.361 01:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:01.361 01:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:01.361 01:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:01.362 01:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:01.362 01:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:01.362 01:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:01.362 01:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:01.362 01:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:09:01.362 01:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:01.362 01:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:01.362 01:09:23 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:01.362 01:09:23 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:01.362 01:09:23 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:01.362 01:09:23 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.362 01:09:23 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.362 01:09:23 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.362 01:09:23 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:09:01.362 01:09:23 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.362 01:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:09:01.362 01:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:01.362 01:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:01.362 01:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:01.362 01:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:01.362 01:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:01.362 01:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:01.362 01:09:23 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:01.362 01:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:01.362 01:09:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:09:01.362 01:09:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:09:01.362 01:09:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:09:01.362 01:09:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:09:01.362 01:09:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:09:01.362 01:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:01.362 01:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:01.362 01:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:01.362 01:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:01.362 01:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:01.362 01:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:01.362 01:09:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:01.362 01:09:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.362 01:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:01.362 01:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:01.362 01:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:09:01.362 01:09:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.649 01:09:28 nvmf_tcp.nvmf_target_discovery -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:06.649 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:09:06.649 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:06.649 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:06.649 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:06.649 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:06.649 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:06.649 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:09:06.649 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:06.649 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:09:06.649 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:09:06.649 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:09:06.649 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:09:06.649 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:09:06.649 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:09:06.649 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:06.649 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:06.649 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:06.649 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:06.649 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:06.649 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:06.649 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:06.649 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:06.649 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:06.649 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:06.649 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:06.649 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:06.649 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:06.649 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:06.649 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:06.649 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:06.649 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:06.649 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:06.649 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:06.649 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:06.649 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:06.649 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:06.649 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:06.649 
01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:06.649 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:06.649 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:06.649 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:06.649 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:06.649 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:06.649 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:06.649 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:06.649 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:06.649 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:06.649 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:06.649 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:06.649 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:06.649 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:06.649 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:06.649 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:06.649 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:06.649 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:06.649 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:06.649 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- 
# pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:06.649 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:06.649 Found net devices under 0000:86:00.0: cvl_0_0 00:09:06.649 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:06.649 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:06.649 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:06.649 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:06.649 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:06.649 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:06.649 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:06.649 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:06.649 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:06.649 Found net devices under 0000:86:00.1: cvl_0_1 00:09:06.650 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:06.650 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:06.650 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:09:06.650 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:06.650 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:06.650 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:06.650 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:09:06.650 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:06.650 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:06.650 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:06.650 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:06.650 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:06.650 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:06.650 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:06.650 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:06.650 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:06.650 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:06.650 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:06.650 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:06.650 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:06.650 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:06.650 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:06.650 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:06.650 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:06.650 01:09:28 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:06.650 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:06.650 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:06.650 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.144 ms 00:09:06.650 00:09:06.650 --- 10.0.0.2 ping statistics --- 00:09:06.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:06.650 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:09:06.650 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:06.650 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:06.650 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:09:06.650 00:09:06.650 --- 10.0.0.1 ping statistics --- 00:09:06.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:06.650 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:09:06.650 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:06.650 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:09:06.650 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:06.650 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:06.650 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:06.650 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:06.650 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:06.650 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:06.650 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:06.650 01:09:28 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:09:06.650 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:06.650 01:09:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:06.650 01:09:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.650 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=755645 00:09:06.650 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 755645 00:09:06.650 01:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:06.650 01:09:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 755645 ']' 00:09:06.650 01:09:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:06.650 01:09:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:06.650 01:09:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:06.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:06.650 01:09:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:06.650 01:09:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.650 [2024-07-25 01:09:28.903757] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:09:06.650 [2024-07-25 01:09:28.903798] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:06.650 EAL: No free 2048 kB hugepages reported on node 1 00:09:06.650 [2024-07-25 01:09:28.959572] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:06.650 [2024-07-25 01:09:29.039409] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:06.650 [2024-07-25 01:09:29.039448] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:06.650 [2024-07-25 01:09:29.039454] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:06.650 [2024-07-25 01:09:29.039460] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:06.650 [2024-07-25 01:09:29.039465] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:06.650 [2024-07-25 01:09:29.039534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:06.650 [2024-07-25 01:09:29.039629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:06.650 [2024-07-25 01:09:29.039719] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:06.650 [2024-07-25 01:09:29.039720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.591 01:09:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:07.591 01:09:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:09:07.591 01:09:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:07.591 01:09:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:07.591 01:09:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:07.591 01:09:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:07.592 [2024-07-25 01:09:29.759154] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:09:07.592 01:09:29 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:07.592 Null1 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:07.592 [2024-07-25 01:09:29.804623] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:07.592 01:09:29 
nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:07.592 Null2 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:07.592 Null3 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd 
bdev_null_create Null4 102400 512 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:07.592 Null4 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.592 01:09:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:09:07.592 00:09:07.592 Discovery Log Number of Records 6, Generation counter 6 00:09:07.592 =====Discovery Log Entry 0====== 00:09:07.592 trtype: tcp 00:09:07.592 adrfam: ipv4 00:09:07.592 subtype: current discovery subsystem 00:09:07.592 treq: not required 00:09:07.592 portid: 0 00:09:07.592 trsvcid: 4420 00:09:07.592 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:07.592 traddr: 10.0.0.2 00:09:07.592 eflags: explicit discovery connections, duplicate discovery information 00:09:07.592 sectype: none 00:09:07.592 =====Discovery Log Entry 1====== 00:09:07.592 trtype: tcp 00:09:07.592 adrfam: ipv4 00:09:07.592 subtype: nvme subsystem 00:09:07.592 treq: not required 00:09:07.592 portid: 0 00:09:07.592 trsvcid: 4420 00:09:07.592 subnqn: nqn.2016-06.io.spdk:cnode1 00:09:07.592 traddr: 10.0.0.2 00:09:07.592 eflags: none 00:09:07.592 sectype: none 00:09:07.592 =====Discovery Log Entry 2====== 00:09:07.592 trtype: tcp 00:09:07.592 adrfam: ipv4 00:09:07.592 subtype: nvme subsystem 00:09:07.592 treq: not required 00:09:07.592 portid: 
0 00:09:07.592 trsvcid: 4420 00:09:07.592 subnqn: nqn.2016-06.io.spdk:cnode2 00:09:07.592 traddr: 10.0.0.2 00:09:07.592 eflags: none 00:09:07.592 sectype: none 00:09:07.592 =====Discovery Log Entry 3====== 00:09:07.592 trtype: tcp 00:09:07.592 adrfam: ipv4 00:09:07.592 subtype: nvme subsystem 00:09:07.592 treq: not required 00:09:07.592 portid: 0 00:09:07.592 trsvcid: 4420 00:09:07.592 subnqn: nqn.2016-06.io.spdk:cnode3 00:09:07.592 traddr: 10.0.0.2 00:09:07.592 eflags: none 00:09:07.592 sectype: none 00:09:07.592 =====Discovery Log Entry 4====== 00:09:07.592 trtype: tcp 00:09:07.592 adrfam: ipv4 00:09:07.592 subtype: nvme subsystem 00:09:07.592 treq: not required 00:09:07.592 portid: 0 00:09:07.592 trsvcid: 4420 00:09:07.593 subnqn: nqn.2016-06.io.spdk:cnode4 00:09:07.593 traddr: 10.0.0.2 00:09:07.593 eflags: none 00:09:07.593 sectype: none 00:09:07.593 =====Discovery Log Entry 5====== 00:09:07.593 trtype: tcp 00:09:07.593 adrfam: ipv4 00:09:07.593 subtype: discovery subsystem referral 00:09:07.593 treq: not required 00:09:07.593 portid: 0 00:09:07.593 trsvcid: 4430 00:09:07.593 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:07.593 traddr: 10.0.0.2 00:09:07.593 eflags: none 00:09:07.593 sectype: none 00:09:07.593 01:09:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:09:07.593 Perform nvmf subsystem discovery via RPC 00:09:07.593 01:09:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:09:07.593 01:09:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.593 01:09:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:07.853 [ 00:09:07.853 { 00:09:07.853 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:09:07.853 "subtype": "Discovery", 00:09:07.853 "listen_addresses": [ 00:09:07.853 { 00:09:07.853 "trtype": "TCP", 00:09:07.853 "adrfam": "IPv4", 00:09:07.853 "traddr": "10.0.0.2", 
00:09:07.853 "trsvcid": "4420" 00:09:07.853 } 00:09:07.853 ], 00:09:07.853 "allow_any_host": true, 00:09:07.853 "hosts": [] 00:09:07.853 }, 00:09:07.853 { 00:09:07.853 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:09:07.853 "subtype": "NVMe", 00:09:07.853 "listen_addresses": [ 00:09:07.853 { 00:09:07.853 "trtype": "TCP", 00:09:07.853 "adrfam": "IPv4", 00:09:07.853 "traddr": "10.0.0.2", 00:09:07.853 "trsvcid": "4420" 00:09:07.853 } 00:09:07.853 ], 00:09:07.853 "allow_any_host": true, 00:09:07.853 "hosts": [], 00:09:07.853 "serial_number": "SPDK00000000000001", 00:09:07.853 "model_number": "SPDK bdev Controller", 00:09:07.853 "max_namespaces": 32, 00:09:07.853 "min_cntlid": 1, 00:09:07.853 "max_cntlid": 65519, 00:09:07.853 "namespaces": [ 00:09:07.853 { 00:09:07.853 "nsid": 1, 00:09:07.853 "bdev_name": "Null1", 00:09:07.853 "name": "Null1", 00:09:07.853 "nguid": "4D9C1484D3CD41A1B5260713C4E39352", 00:09:07.853 "uuid": "4d9c1484-d3cd-41a1-b526-0713c4e39352" 00:09:07.853 } 00:09:07.853 ] 00:09:07.853 }, 00:09:07.853 { 00:09:07.853 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:07.853 "subtype": "NVMe", 00:09:07.853 "listen_addresses": [ 00:09:07.853 { 00:09:07.853 "trtype": "TCP", 00:09:07.853 "adrfam": "IPv4", 00:09:07.853 "traddr": "10.0.0.2", 00:09:07.853 "trsvcid": "4420" 00:09:07.853 } 00:09:07.853 ], 00:09:07.853 "allow_any_host": true, 00:09:07.853 "hosts": [], 00:09:07.853 "serial_number": "SPDK00000000000002", 00:09:07.853 "model_number": "SPDK bdev Controller", 00:09:07.853 "max_namespaces": 32, 00:09:07.854 "min_cntlid": 1, 00:09:07.854 "max_cntlid": 65519, 00:09:07.854 "namespaces": [ 00:09:07.854 { 00:09:07.854 "nsid": 1, 00:09:07.854 "bdev_name": "Null2", 00:09:07.854 "name": "Null2", 00:09:07.854 "nguid": "9577AB337D6748D7A413F10F7F2D5563", 00:09:07.854 "uuid": "9577ab33-7d67-48d7-a413-f10f7f2d5563" 00:09:07.854 } 00:09:07.854 ] 00:09:07.854 }, 00:09:07.854 { 00:09:07.854 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:09:07.854 "subtype": "NVMe", 00:09:07.854 
"listen_addresses": [ 00:09:07.854 { 00:09:07.854 "trtype": "TCP", 00:09:07.854 "adrfam": "IPv4", 00:09:07.854 "traddr": "10.0.0.2", 00:09:07.854 "trsvcid": "4420" 00:09:07.854 } 00:09:07.854 ], 00:09:07.854 "allow_any_host": true, 00:09:07.854 "hosts": [], 00:09:07.854 "serial_number": "SPDK00000000000003", 00:09:07.854 "model_number": "SPDK bdev Controller", 00:09:07.854 "max_namespaces": 32, 00:09:07.854 "min_cntlid": 1, 00:09:07.854 "max_cntlid": 65519, 00:09:07.854 "namespaces": [ 00:09:07.854 { 00:09:07.854 "nsid": 1, 00:09:07.854 "bdev_name": "Null3", 00:09:07.854 "name": "Null3", 00:09:07.854 "nguid": "94015646D8564059962538082A87A747", 00:09:07.854 "uuid": "94015646-d856-4059-9625-38082a87a747" 00:09:07.854 } 00:09:07.854 ] 00:09:07.854 }, 00:09:07.854 { 00:09:07.854 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:09:07.854 "subtype": "NVMe", 00:09:07.854 "listen_addresses": [ 00:09:07.854 { 00:09:07.854 "trtype": "TCP", 00:09:07.854 "adrfam": "IPv4", 00:09:07.854 "traddr": "10.0.0.2", 00:09:07.854 "trsvcid": "4420" 00:09:07.854 } 00:09:07.854 ], 00:09:07.854 "allow_any_host": true, 00:09:07.854 "hosts": [], 00:09:07.854 "serial_number": "SPDK00000000000004", 00:09:07.854 "model_number": "SPDK bdev Controller", 00:09:07.854 "max_namespaces": 32, 00:09:07.854 "min_cntlid": 1, 00:09:07.854 "max_cntlid": 65519, 00:09:07.854 "namespaces": [ 00:09:07.854 { 00:09:07.854 "nsid": 1, 00:09:07.854 "bdev_name": "Null4", 00:09:07.854 "name": "Null4", 00:09:07.854 "nguid": "AD9406C7DE424CA6AB96F2802C36D871", 00:09:07.854 "uuid": "ad9406c7-de42-4ca6-ab96-f2802c36d871" 00:09:07.854 } 00:09:07.854 ] 00:09:07.854 } 00:09:07.854 ] 00:09:07.854 01:09:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.854 01:09:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:09:07.854 01:09:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:07.854 01:09:30 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:07.854 01:09:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.854 01:09:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:07.854 01:09:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.854 01:09:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:09:07.854 01:09:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.854 01:09:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:07.854 01:09:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.854 01:09:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:07.854 01:09:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:09:07.854 01:09:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.854 01:09:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:07.854 01:09:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.854 01:09:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:09:07.854 01:09:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.854 01:09:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:07.854 01:09:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.854 01:09:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:07.854 01:09:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:09:07.854 01:09:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.854 01:09:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:07.854 01:09:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.854 01:09:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:09:07.854 01:09:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.854 01:09:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:07.854 01:09:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.854 01:09:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:07.854 01:09:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:09:07.854 01:09:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.854 01:09:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:07.854 01:09:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.854 01:09:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:09:07.854 01:09:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.854 01:09:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:07.854 01:09:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.854 01:09:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:09:07.854 01:09:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:09:07.854 01:09:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:07.854 01:09:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.854 01:09:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:09:07.854 01:09:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:09:07.854 01:09:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.854 01:09:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:07.854 01:09:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.854 01:09:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:09:07.854 01:09:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:09:07.854 01:09:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:09:07.854 01:09:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:09:07.854 01:09:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:07.854 01:09:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:09:07.854 01:09:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:07.854 01:09:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:09:07.854 01:09:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:07.854 01:09:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:07.854 rmmod nvme_tcp 00:09:07.854 rmmod nvme_fabrics 00:09:07.854 rmmod nvme_keyring 00:09:07.854 01:09:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:07.854 01:09:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:09:07.854 
01:09:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:09:07.854 01:09:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 755645 ']' 00:09:07.854 01:09:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 755645 00:09:07.854 01:09:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 755645 ']' 00:09:07.854 01:09:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 755645 00:09:07.854 01:09:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:09:07.854 01:09:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:07.854 01:09:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 755645 00:09:07.854 01:09:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:07.854 01:09:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:07.854 01:09:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 755645' 00:09:07.854 killing process with pid 755645 00:09:07.854 01:09:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 755645 00:09:07.854 01:09:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 755645 00:09:08.114 01:09:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:08.115 01:09:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:08.115 01:09:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:08.115 01:09:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:08.115 01:09:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:08.115 01:09:30 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:08.115 01:09:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:08.115 01:09:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:10.655 01:09:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:10.655 00:09:10.655 real 0m9.195s 00:09:10.655 user 0m7.522s 00:09:10.655 sys 0m4.344s 00:09:10.655 01:09:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:10.655 01:09:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:10.655 ************************************ 00:09:10.655 END TEST nvmf_target_discovery 00:09:10.655 ************************************ 00:09:10.655 01:09:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:10.655 01:09:32 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:09:10.655 01:09:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:10.655 01:09:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:10.655 01:09:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:10.655 ************************************ 00:09:10.655 START TEST nvmf_referrals 00:09:10.655 ************************************ 00:09:10.655 01:09:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:09:10.655 * Looking for test storage... 
00:09:10.655 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:10.655 01:09:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:10.655 01:09:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:09:10.655 01:09:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:10.655 01:09:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:10.655 01:09:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:10.655 01:09:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:10.655 01:09:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:10.655 01:09:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:10.655 01:09:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:10.655 01:09:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:10.655 01:09:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:10.655 01:09:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:10.656 01:09:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:10.656 01:09:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:10.656 01:09:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:10.656 01:09:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:10.656 01:09:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:10.656 01:09:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:09:10.656 01:09:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:10.656 01:09:32 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:10.656 01:09:32 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:10.656 01:09:32 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:10.656 01:09:32 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.656 01:09:32 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.656 01:09:32 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.656 01:09:32 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:09:10.656 01:09:32 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.656 01:09:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:09:10.656 01:09:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:10.656 01:09:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:10.656 01:09:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:10.656 01:09:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:10.656 01:09:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:10.656 01:09:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:10.656 01:09:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:10.656 
01:09:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:10.656 01:09:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:09:10.656 01:09:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:09:10.656 01:09:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:09:10.656 01:09:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:09:10.656 01:09:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:09:10.656 01:09:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:09:10.656 01:09:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:09:10.656 01:09:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:10.656 01:09:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:10.656 01:09:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:10.656 01:09:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:10.656 01:09:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:10.656 01:09:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:10.656 01:09:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:10.656 01:09:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:10.656 01:09:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:10.656 01:09:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:10.656 01:09:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:09:10.656 01:09:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
00:09:16.115 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:16.115 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:09:16.115 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:16.115 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:16.115 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:16.116 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:16.116 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:16.116 Found net devices under 0000:86:00.0: cvl_0_0 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:16.116 01:09:37 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:16.116 Found net devices under 0000:86:00.1: cvl_0_1 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:16.116 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:16.116 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.285 ms 00:09:16.116 00:09:16.116 --- 10.0.0.2 ping statistics --- 00:09:16.116 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:16.116 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:16.116 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:16.116 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.500 ms 00:09:16.116 00:09:16.116 --- 10.0.0.1 ping statistics --- 00:09:16.116 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:16.116 rtt min/avg/max/mdev = 0.500/0.500/0.500/0.000 ms 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:16.116 01:09:37 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=759204 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 759204 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:16.116 01:09:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 759204 ']' 00:09:16.117 01:09:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:16.117 01:09:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:16.117 01:09:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:16.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:16.117 01:09:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:16.117 01:09:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:16.117 [2024-07-25 01:09:37.905648] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:09:16.117 [2024-07-25 01:09:37.905695] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:16.117 EAL: No free 2048 kB hugepages reported on node 1 00:09:16.117 [2024-07-25 01:09:37.965574] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:16.117 [2024-07-25 01:09:38.052375] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:16.117 [2024-07-25 01:09:38.052409] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:16.117 [2024-07-25 01:09:38.052416] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:16.117 [2024-07-25 01:09:38.052422] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:16.117 [2024-07-25 01:09:38.052427] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:16.117 [2024-07-25 01:09:38.052468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:16.117 [2024-07-25 01:09:38.052484] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:16.117 [2024-07-25 01:09:38.052574] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:16.117 [2024-07-25 01:09:38.052575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.378 01:09:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:16.378 01:09:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:09:16.378 01:09:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:16.378 01:09:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:16.378 01:09:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:16.378 01:09:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:16.378 01:09:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:16.378 01:09:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.378 01:09:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:16.378 [2024-07-25 01:09:38.764104] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:16.378 01:09:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.378 01:09:38 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:09:16.378 01:09:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.378 01:09:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:16.378 [2024-07-25 01:09:38.777481] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:09:16.378 01:09:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.378 01:09:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:09:16.378 01:09:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.378 01:09:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:16.378 01:09:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.378 01:09:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:09:16.378 01:09:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.378 01:09:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:16.378 01:09:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.378 01:09:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:09:16.378 01:09:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.378 01:09:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:16.378 01:09:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.378 01:09:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:16.378 01:09:38 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@48 -- # jq length 00:09:16.378 01:09:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.378 01:09:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:16.378 01:09:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.378 01:09:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:09:16.378 01:09:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:09:16.378 01:09:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:16.378 01:09:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:16.378 01:09:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:16.378 01:09:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.378 01:09:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:16.378 01:09:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:16.378 01:09:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.639 01:09:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:16.639 01:09:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:16.639 01:09:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:09:16.639 01:09:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:16.639 01:09:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:16.639 01:09:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t 
tcp -a 10.0.0.2 -s 8009 -o json 00:09:16.639 01:09:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:16.639 01:09:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:16.639 01:09:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:16.639 01:09:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:16.639 01:09:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:09:16.639 01:09:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.639 01:09:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:16.639 01:09:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.639 01:09:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:09:16.639 01:09:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.639 01:09:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:16.639 01:09:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.639 01:09:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:09:16.639 01:09:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.639 01:09:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:16.639 01:09:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.639 01:09:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:16.639 01:09:39 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.639 01:09:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:16.639 01:09:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:09:16.639 01:09:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.899 01:09:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:09:16.899 01:09:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:09:16.899 01:09:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:16.899 01:09:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:16.899 01:09:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:16.899 01:09:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:16.899 01:09:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:16.899 01:09:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:09:16.899 01:09:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:09:16.899 01:09:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:09:16.899 01:09:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.899 01:09:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:16.899 01:09:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.899 01:09:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n 
nqn.2016-06.io.spdk:cnode1 00:09:16.899 01:09:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.899 01:09:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:16.899 01:09:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.899 01:09:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:09:16.899 01:09:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:16.899 01:09:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:16.899 01:09:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:16.899 01:09:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.899 01:09:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:16.899 01:09:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:16.899 01:09:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.899 01:09:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:09:16.899 01:09:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:16.899 01:09:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:09:16.899 01:09:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:16.899 01:09:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:16.899 01:09:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:16.899 01:09:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | 
select(.subtype != "current discovery subsystem").traddr' 00:09:16.899 01:09:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:17.160 01:09:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:09:17.160 01:09:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:17.160 01:09:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:09:17.160 01:09:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:09:17.160 01:09:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:17.160 01:09:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:17.160 01:09:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:17.160 01:09:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:17.160 01:09:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:09:17.160 01:09:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:09:17.160 01:09:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:17.160 01:09:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:17.160 01:09:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 
00:09:17.419 01:09:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:17.419 01:09:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:17.419 01:09:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.420 01:09:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:17.420 01:09:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.420 01:09:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:09:17.420 01:09:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:17.420 01:09:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:17.420 01:09:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:17.420 01:09:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.420 01:09:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:17.420 01:09:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:17.420 01:09:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.420 01:09:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:09:17.420 01:09:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:17.420 01:09:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:09:17.420 01:09:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:17.420 01:09:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:17.420 01:09:39 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:17.420 01:09:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:17.420 01:09:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:17.420 01:09:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:09:17.420 01:09:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:17.420 01:09:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:09:17.420 01:09:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:09:17.420 01:09:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:17.420 01:09:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:17.420 01:09:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:17.680 01:09:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:09:17.680 01:09:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:09:17.680 01:09:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:09:17.680 01:09:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:17.680 01:09:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t 
tcp -a 10.0.0.2 -s 8009 -o json 00:09:17.680 01:09:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:17.680 01:09:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:17.680 01:09:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:09:17.680 01:09:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.680 01:09:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:17.680 01:09:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.680 01:09:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:17.680 01:09:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:09:17.680 01:09:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.680 01:09:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:17.680 01:09:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.680 01:09:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:09:17.680 01:09:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:09:17.680 01:09:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:17.680 01:09:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:17.680 01:09:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:17.680 01:09:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:17.680 01:09:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:17.940 01:09:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:09:17.940 01:09:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:09:17.940 01:09:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:09:17.940 01:09:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:09:17.940 01:09:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:17.940 01:09:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:09:17.940 01:09:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:17.940 01:09:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:09:17.940 01:09:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:17.940 01:09:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:17.940 rmmod nvme_tcp 00:09:17.940 rmmod nvme_fabrics 00:09:17.940 rmmod nvme_keyring 00:09:17.940 01:09:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:17.940 01:09:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:09:17.940 01:09:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:09:17.940 01:09:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 759204 ']' 00:09:17.940 01:09:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 759204 00:09:17.940 01:09:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 759204 ']' 00:09:17.940 01:09:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 759204 00:09:17.940 01:09:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:09:17.940 01:09:40 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:17.940 01:09:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 759204 00:09:17.940 01:09:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:17.940 01:09:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:17.940 01:09:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 759204' 00:09:17.940 killing process with pid 759204 00:09:17.940 01:09:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 759204 00:09:17.940 01:09:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 759204 00:09:18.201 01:09:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:18.201 01:09:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:18.201 01:09:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:18.201 01:09:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:18.201 01:09:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:18.201 01:09:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:18.201 01:09:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:18.201 01:09:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:20.745 01:09:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:20.745 00:09:20.745 real 0m9.995s 00:09:20.745 user 0m12.176s 00:09:20.745 sys 0m4.366s 00:09:20.745 01:09:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:20.745 01:09:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:20.745 
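The `get_referral_ips` checks traced above all follow the same shape: collect the referral addresses (from `rpc_cmd nvmf_discovery_get_referrals` or `nvme discover -o json`), sort them, join them into one whitespace-separated string, and compare against the expected list. A minimal standalone sketch of that pattern (illustrative values only, not SPDK code):

```shell
# Sketch of the compare pattern used by get_referral_ips in referrals.sh:
# gather addresses in arbitrary order, sort, join, then string-compare.
expected="127.0.0.2 127.0.0.3 127.0.0.4"
got=$(printf '%s\n' 127.0.0.4 127.0.0.2 127.0.0.3 | sort | xargs)
if [ "$got" = "$expected" ]; then
  echo "referral list matches"
fi
```

Sorting both sides first is what makes the `[[ $got == $expected ]]` comparisons in the log order-independent.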
************************************ 00:09:20.745 END TEST nvmf_referrals 00:09:20.745 ************************************ 00:09:20.745 01:09:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:20.745 01:09:42 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:20.745 01:09:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:20.745 01:09:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:20.745 01:09:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:20.745 ************************************ 00:09:20.745 START TEST nvmf_connect_disconnect 00:09:20.745 ************************************ 00:09:20.745 01:09:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:20.745 * Looking for test storage... 
00:09:20.745 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:20.745 01:09:42 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:20.745 01:09:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:09:20.745 01:09:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:20.745 01:09:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:20.745 01:09:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:20.745 01:09:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:20.745 01:09:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:20.745 01:09:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:20.745 01:09:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:20.745 01:09:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:20.745 01:09:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:20.745 01:09:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:20.745 01:09:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:20.745 01:09:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:20.745 01:09:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:20.745 01:09:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:20.745 01:09:42 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:20.745 01:09:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:20.745 01:09:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:20.745 01:09:42 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:20.745 01:09:42 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:20.745 01:09:42 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:20.745 01:09:42 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.745 01:09:42 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.745 01:09:42 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.745 01:09:42 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:09:20.745 01:09:42 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.745 01:09:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:09:20.745 01:09:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:20.745 01:09:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:20.745 01:09:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:20.745 01:09:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:20.745 01:09:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:20.745 01:09:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:20.745 01:09:42 
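The `paths/export.sh` records above show the same `/opt/golangci`, `/opt/protoc`, and `/opt/go` directories prepended to `PATH` repeatedly across nested sourcing. A hypothetical order-preserving dedup helper (`dedup_path` is my name, not part of the SPDK scripts) would keep only the first occurrence of each entry:

```shell
# Hypothetical helper (not in paths/export.sh): drop duplicate PATH
# entries while preserving first-seen order.
dedup_path() {
  printf '%s' "$1" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//'
}
dedup_path "/usr/bin:/opt/go/bin:/usr/bin:/sbin:/opt/go/bin"
```

Duplicate `PATH` entries are harmless for lookup (the first match wins) but make the exported value hard to read, as the log demonstrates.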
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:20.745 01:09:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:20.745 01:09:42 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:20.745 01:09:42 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:20.745 01:09:42 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:09:20.745 01:09:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:20.745 01:09:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:20.745 01:09:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:20.745 01:09:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:20.745 01:09:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:20.745 01:09:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:20.745 01:09:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:20.745 01:09:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:20.745 01:09:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:20.745 01:09:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:20.745 01:09:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:09:20.745 01:09:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:26.030 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:26.030 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 
-- # pci_devs=() 00:09:26.030 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:26.030 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:26.030 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:26.030 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:26.030 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:26.030 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:09:26.030 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:26.030 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:09:26.030 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:09:26.030 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:09:26.030 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:09:26.030 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:09:26.030 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:09:26.030 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:26.030 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:26.030 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:26.030 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:26.030 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:26.030 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:26.030 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:26.030 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:26.030 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:26.030 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:26.030 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:26.031 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:26.031 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:26.031 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:26.031 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:26.031 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:26.031 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:26.031 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:26.031 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:26.031 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:26.031 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:26.031 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:26.031 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:26.031 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:09:26.031 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:26.031 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:26.031 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:26.031 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:26.031 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:26.031 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:26.031 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:26.031 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:26.031 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:26.031 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:26.031 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:26.031 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:26.031 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:26.031 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:26.031 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:26.031 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:26.031 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:26.031 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:26.031 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:26.031 
01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:26.031 Found net devices under 0000:86:00.0: cvl_0_0 00:09:26.031 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:26.031 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:26.031 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:26.031 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:26.031 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:26.031 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:26.031 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:26.031 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:26.031 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:26.031 Found net devices under 0000:86:00.1: cvl_0_1 00:09:26.031 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:26.031 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:26.031 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:09:26.031 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:26.031 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:26.031 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:26.031 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:26.031 01:09:47 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:26.031 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:26.031 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:26.031 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:26.031 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:26.031 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:26.031 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:26.031 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:26.031 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:26.031 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:26.031 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:26.031 01:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:26.031 01:09:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:26.031 01:09:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:26.031 01:09:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:26.031 01:09:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:26.031 01:09:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:26.031 
01:09:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:26.031 01:09:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:26.031 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:26.031 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:09:26.031 00:09:26.031 --- 10.0.0.2 ping statistics --- 00:09:26.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:26.031 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:09:26.031 01:09:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:26.031 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:26.031 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.386 ms 00:09:26.031 00:09:26.031 --- 10.0.0.1 ping statistics --- 00:09:26.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:26.031 rtt min/avg/max/mdev = 0.386/0.386/0.386/0.000 ms 00:09:26.031 01:09:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:26.031 01:09:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:09:26.031 01:09:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:26.031 01:09:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:26.031 01:09:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:26.031 01:09:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:26.031 01:09:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:26.031 01:09:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:26.031 01:09:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:26.031 01:09:48 
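[Editor's note] The `nvmf_tcp_init` trace above moves one port (`cvl_0_0`) into a network namespace, leaves its peer (`cvl_0_1`) in the host, assigns 10.0.0.2 and 10.0.0.1 respectively, opens TCP port 4420, and pings both ways. A minimal dry-run sketch of that sequence (interface and namespace names taken from this run; the `run` helper only prints the commands, since the real ones need root and this hardware):

```shell
# Dry-run sketch of the nvmf_tcp_init sequence traced above.
# Names (cvl_0_0, cvl_0_1, cvl_0_0_ns_spdk) come from this particular run.
target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk
initiator_ip=10.0.0.1 target_ip=10.0.0.2

run() { echo "+ $*"; }   # swap the body for: "$@"  (as root) to actually apply

run ip -4 addr flush "$target_if"
run ip -4 addr flush "$initiator_if"
run ip netns add "$ns"
run ip link set "$target_if" netns "$ns"                # target port lives in the netns
run ip addr add "$initiator_ip/24" dev "$initiator_if"  # initiator stays in the host
run ip netns exec "$ns" ip addr add "$target_ip/24" dev "$target_if"
run ip link set "$initiator_if" up
run ip netns exec "$ns" ip link set "$target_if" up
run ip netns exec "$ns" ip link set lo up
run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 "$target_ip"                              # host -> namespace
run ip netns exec "$ns" ping -c 1 "$initiator_ip"       # namespace -> host
```

Putting the target interface in its own namespace is what lets a single machine exercise a real TCP path between initiator and target.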
nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:09:26.031 01:09:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:26.031 01:09:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:26.031 01:09:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:26.031 01:09:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:26.031 01:09:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=763258 00:09:26.031 01:09:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 763258 00:09:26.031 01:09:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 763258 ']' 00:09:26.031 01:09:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:26.031 01:09:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:26.031 01:09:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:26.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:26.031 01:09:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:26.031 01:09:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:26.031 [2024-07-25 01:09:48.230351] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:09:26.031 [2024-07-25 01:09:48.230393] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:26.031 EAL: No free 2048 kB hugepages reported on node 1 00:09:26.031 [2024-07-25 01:09:48.286952] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:26.031 [2024-07-25 01:09:48.367600] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:26.031 [2024-07-25 01:09:48.367639] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:26.031 [2024-07-25 01:09:48.367646] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:26.031 [2024-07-25 01:09:48.367652] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:26.031 [2024-07-25 01:09:48.367657] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:26.031 [2024-07-25 01:09:48.367877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:26.031 [2024-07-25 01:09:48.367959] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:26.031 [2024-07-25 01:09:48.368172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:26.032 [2024-07-25 01:09:48.368174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.602 01:09:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:26.602 01:09:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:09:26.602 01:09:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:26.602 01:09:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:26.602 01:09:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:26.602 01:09:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:26.602 01:09:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:26.602 01:09:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:26.602 01:09:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:26.602 [2024-07-25 01:09:49.094999] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:26.862 01:09:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:26.862 01:09:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:09:26.862 01:09:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:26.862 01:09:49 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@10 -- # set +x 00:09:26.862 01:09:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:26.862 01:09:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:09:26.862 01:09:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:26.862 01:09:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:26.862 01:09:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:26.862 01:09:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:26.862 01:09:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:26.862 01:09:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:26.862 01:09:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:26.862 01:09:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:26.862 01:09:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:26.862 01:09:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:26.862 01:09:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:26.862 [2024-07-25 01:09:49.146848] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:26.862 01:09:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:26.862 01:09:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:09:26.862 01:09:49 
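[Editor's note] The target bring-up traced above reduces to five RPC calls against the running `nvmf_tgt`: create the TCP transport, create a malloc bdev, create a subsystem, attach the bdev as a namespace, and add a listener. A hedged sketch using the same arguments as this run (`rpc` is a dry-run stub here; a real run would invoke SPDK's `scripts/rpc.py` inside the target's netns):

```shell
# Sketch of the target-side RPC sequence from the trace above.
# Assumes an SPDK checkout with a running nvmf_tgt; rpc() just echoes here.
rpc() { echo "+ rpc.py $*"; }   # real runs: scripts/rpc.py "$@"

rpc nvmf_create_transport -t tcp -o -u 8192 -c 0   # TCP transport, as in @18 above
bdev=Malloc0                                       # name returned by bdev_malloc_create 64 512
rpc bdev_malloc_create 64 512
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

The listener address (10.0.0.2:4420) matches the namespace-side IP configured during `nvmf_tcp_init`, which is why the initiator can reach it from the host side.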
nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:09:26.862 01:09:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:09:30.157 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:33.455 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:36.751 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:40.044 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:43.339 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:43.339 01:10:05 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:09:43.339 01:10:05 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:09:43.339 01:10:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:43.339 01:10:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:09:43.339 01:10:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:43.339 01:10:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:09:43.339 01:10:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:43.339 01:10:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:43.339 rmmod nvme_tcp 00:09:43.339 rmmod nvme_fabrics 00:09:43.339 rmmod nvme_keyring 00:09:43.339 01:10:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:43.339 01:10:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:09:43.339 01:10:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:09:43.339 01:10:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 763258 ']' 00:09:43.339 01:10:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 763258 00:09:43.339 01:10:05 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 763258 ']' 00:09:43.339 01:10:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 763258 00:09:43.339 01:10:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:09:43.339 01:10:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:43.339 01:10:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 763258 00:09:43.339 01:10:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:43.339 01:10:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:43.339 01:10:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 763258' 00:09:43.339 killing process with pid 763258 00:09:43.339 01:10:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 763258 00:09:43.339 01:10:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 763258 00:09:43.339 01:10:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:43.339 01:10:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:43.339 01:10:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:43.339 01:10:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:43.339 01:10:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:43.339 01:10:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:43.339 01:10:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:43.339 01:10:05 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:45.879 01:10:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:45.879 00:09:45.879 real 0m25.145s 00:09:45.879 user 1m10.705s 00:09:45.879 sys 0m5.068s 00:09:45.879 01:10:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:45.879 01:10:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:45.879 ************************************ 00:09:45.879 END TEST nvmf_connect_disconnect 00:09:45.879 ************************************ 00:09:45.879 01:10:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:45.879 01:10:07 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:09:45.879 01:10:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:45.879 01:10:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:45.879 01:10:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:45.879 ************************************ 00:09:45.879 START TEST nvmf_multitarget 00:09:45.879 ************************************ 00:09:45.879 01:10:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:09:45.879 * Looking for test storage... 
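[Editor's note] The five "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" lines earlier correspond to `connect_disconnect.sh` looping `num_iterations=5` times over an nvme-cli connect/disconnect cycle. A hedged dry-run sketch of that loop (the `nvme` stub only echoes; flags follow standard nvme-cli syntax, and the exact loop body in the script may differ):

```shell
# Dry-run sketch of the connect/disconnect loop behind the five
# "disconnected 1 controller(s)" lines above (nvme-cli syntax; echoed only).
nqn=nqn.2016-06.io.spdk:cnode1 ip=10.0.0.2 port=4420
nvme() { echo "+ nvme $*"; }    # drop this stub to run the real nvme-cli

for i in $(seq 1 5); do         # num_iterations=5, set at connect_disconnect.sh@31
  nvme connect -t tcp -n "$nqn" -a "$ip" -s "$port"
  nvme disconnect -n "$nqn"
done
```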
00:09:45.879 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:45.879 01:10:07 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:45.879 01:10:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:09:45.879 01:10:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:45.879 01:10:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:45.879 01:10:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:45.879 01:10:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:45.879 01:10:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:45.879 01:10:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:45.879 01:10:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:45.879 01:10:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:45.879 01:10:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:45.879 01:10:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:45.879 01:10:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:45.879 01:10:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:45.879 01:10:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:45.879 01:10:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:45.879 01:10:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:45.879 01:10:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:45.879 01:10:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:45.879 01:10:08 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:45.879 01:10:08 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:45.879 01:10:08 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:45.879 01:10:08 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.879 01:10:08 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.879 01:10:08 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.879 01:10:08 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:09:45.880 01:10:08 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.880 01:10:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:09:45.880 01:10:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:45.880 01:10:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:45.880 01:10:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:45.880 01:10:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:45.880 01:10:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:45.880 01:10:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:45.880 01:10:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 
']' 00:09:45.880 01:10:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:45.880 01:10:08 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:45.880 01:10:08 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:09:45.880 01:10:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:45.880 01:10:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:45.880 01:10:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:45.880 01:10:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:45.880 01:10:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:45.880 01:10:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:45.880 01:10:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:45.880 01:10:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:45.880 01:10:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:45.880 01:10:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:45.880 01:10:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:09:45.880 01:10:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:51.169 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:51.169 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:09:51.169 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:51.169 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:51.169 01:10:12 nvmf_tcp.nvmf_multitarget 
-- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:51.169 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:51.169 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:51.169 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:09:51.169 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:51.169 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:09:51.169 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:09:51.169 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:09:51.169 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:09:51.169 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:09:51.169 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:09:51.169 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:51.169 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:51.169 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:51.169 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:51.169 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:51.169 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:51.169 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:51.169 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:51.169 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:09:51.169 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:51.169 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:51.169 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:51.169 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:51.169 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:51.169 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:51.169 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:51.169 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:51.169 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:51.169 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:51.169 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:51.169 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:51.169 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:51.169 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:51.169 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:51.169 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:51.169 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:51.169 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:51.169 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:51.169 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:51.169 01:10:12 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:51.169 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:51.169 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:51.169 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:51.169 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:51.169 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:51.169 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:51.169 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:51.169 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:51.169 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:51.169 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:51.169 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:51.169 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:51.169 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:51.169 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:51.169 Found net devices under 0000:86:00.0: cvl_0_0 00:09:51.169 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:51.169 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:51.169 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:51.169 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:51.169 01:10:12 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:51.169 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:51.169 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:51.169 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:51.169 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:51.169 Found net devices under 0000:86:00.1: cvl_0_1 00:09:51.169 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:51.169 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:51.169 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:09:51.169 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:51.169 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:51.169 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:51.170 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:51.170 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:51.170 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:51.170 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:51.170 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:51.170 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:51.170 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:51.170 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:51.170 01:10:12 
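[Editor's note] The device-name extraction traced above relies on bash parameter expansion: the glob at `common.sh@383` fills `pci_net_devs` with full sysfs paths, and `"${pci_net_devs[@]##*/}"` at `@399` strips the longest leading match of `*/` from each element, leaving the bare interface names that get echoed ("Found net devices under ..."). A self-contained illustration with paths shaped like this run's:

```shell
# Illustration of the ${arr[@]##*/} expansion used at nvmf/common.sh@399:
# strip the longest leading match of */ from each element, keeping the basename.
pci_net_devs=("/sys/bus/pci/devices/0000:86:00.0/net/cvl_0_0"
              "/sys/bus/pci/devices/0000:86:00.1/net/cvl_0_1")
pci_net_devs=("${pci_net_devs[@]##*/}")
echo "${pci_net_devs[@]}"    # cvl_0_0 cvl_0_1
```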
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:51.170 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:51.170 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:51.170 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:51.170 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:51.170 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:51.170 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:51.170 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:51.170 01:10:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:51.170 01:10:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:51.170 01:10:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:51.170 01:10:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:51.170 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:51.170 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:09:51.170 00:09:51.170 --- 10.0.0.2 ping statistics --- 00:09:51.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:51.170 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:09:51.170 01:10:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:51.170 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:51.170 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:09:51.170 00:09:51.170 --- 10.0.0.1 ping statistics --- 00:09:51.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:51.170 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:09:51.170 01:10:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:51.170 01:10:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:09:51.170 01:10:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:51.170 01:10:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:51.170 01:10:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:51.170 01:10:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:51.170 01:10:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:51.170 01:10:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:51.170 01:10:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:51.170 01:10:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:09:51.170 01:10:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:51.170 01:10:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:51.170 01:10:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:51.170 01:10:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:51.170 01:10:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=769557 00:09:51.170 01:10:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 769557 00:09:51.170 01:10:13 nvmf_tcp.nvmf_multitarget -- 
common/autotest_common.sh@829 -- # '[' -z 769557 ']' 00:09:51.170 01:10:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:51.170 01:10:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:51.170 01:10:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:51.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:51.170 01:10:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:51.170 01:10:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:51.170 [2024-07-25 01:10:13.179422] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:09:51.170 [2024-07-25 01:10:13.179471] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:51.170 EAL: No free 2048 kB hugepages reported on node 1 00:09:51.170 [2024-07-25 01:10:13.236370] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:51.170 [2024-07-25 01:10:13.316478] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:51.170 [2024-07-25 01:10:13.316518] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:51.170 [2024-07-25 01:10:13.316526] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:51.170 [2024-07-25 01:10:13.316532] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:51.170 [2024-07-25 01:10:13.316540] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:51.170 [2024-07-25 01:10:13.316583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:51.170 [2024-07-25 01:10:13.316683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:51.170 [2024-07-25 01:10:13.316767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:51.170 [2024-07-25 01:10:13.316768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.741 01:10:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:51.741 01:10:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:09:51.741 01:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:51.741 01:10:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:51.741 01:10:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:51.741 01:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:51.742 01:10:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:51.742 01:10:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:51.742 01:10:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:09:51.742 01:10:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:09:51.742 01:10:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:09:51.742 "nvmf_tgt_1" 00:09:52.003 01:10:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:09:52.003 "nvmf_tgt_2" 00:09:52.003 01:10:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:52.003 01:10:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:09:52.003 01:10:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:09:52.003 01:10:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:09:52.264 true 00:09:52.264 01:10:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:09:52.264 true 00:09:52.264 01:10:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:52.264 01:10:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:09:52.525 01:10:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:09:52.525 01:10:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:52.525 01:10:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:09:52.525 01:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:52.525 01:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:09:52.525 01:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:52.525 01:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:09:52.525 01:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:52.525 01:10:14 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:52.525 rmmod nvme_tcp 00:09:52.525 rmmod nvme_fabrics 00:09:52.525 rmmod nvme_keyring 00:09:52.525 01:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:52.525 01:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:09:52.525 01:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:09:52.525 01:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 769557 ']' 00:09:52.525 01:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 769557 00:09:52.525 01:10:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 769557 ']' 00:09:52.525 01:10:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 769557 00:09:52.525 01:10:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:09:52.525 01:10:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:52.525 01:10:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 769557 00:09:52.525 01:10:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:52.525 01:10:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:52.525 01:10:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 769557' 00:09:52.525 killing process with pid 769557 00:09:52.525 01:10:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 769557 00:09:52.525 01:10:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 769557 00:09:52.786 01:10:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:52.786 01:10:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:52.786 01:10:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # 
nvmf_tcp_fini 00:09:52.786 01:10:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:52.786 01:10:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:52.786 01:10:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:52.786 01:10:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:52.786 01:10:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:54.699 01:10:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:54.699 00:09:54.699 real 0m9.205s 00:09:54.699 user 0m8.929s 00:09:54.699 sys 0m4.390s 00:09:54.699 01:10:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:54.699 01:10:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:54.699 ************************************ 00:09:54.699 END TEST nvmf_multitarget 00:09:54.699 ************************************ 00:09:54.699 01:10:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:54.699 01:10:17 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:09:54.699 01:10:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:54.699 01:10:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:54.699 01:10:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:54.699 ************************************ 00:09:54.699 START TEST nvmf_rpc 00:09:54.699 ************************************ 00:09:54.699 01:10:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:09:54.960 * Looking for test storage... 
00:09:54.960 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:54.960 01:10:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:54.960 01:10:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:09:54.960 01:10:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:54.960 01:10:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:54.960 01:10:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:54.960 01:10:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:54.960 01:10:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:54.960 01:10:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:54.960 01:10:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:54.960 01:10:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:54.960 01:10:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:54.960 01:10:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:54.960 01:10:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:54.960 01:10:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:54.960 01:10:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:54.960 01:10:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:54.960 01:10:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:54.960 01:10:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:54.960 01:10:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:54.960 01:10:17 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:54.960 01:10:17 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:54.960 01:10:17 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:54.960 01:10:17 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.960 01:10:17 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.960 01:10:17 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.960 01:10:17 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:09:54.960 01:10:17 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.960 01:10:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:09:54.960 01:10:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:54.960 01:10:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:54.960 01:10:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:54.960 01:10:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:54.960 01:10:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:54.960 01:10:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:54.960 01:10:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:54.960 01:10:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:09:54.960 01:10:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:09:54.960 01:10:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:09:54.960 01:10:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:54.960 01:10:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:54.960 01:10:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:54.960 01:10:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:54.960 01:10:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:54.960 01:10:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:54.960 01:10:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:54.960 01:10:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:54.960 01:10:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:54.960 01:10:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:54.960 01:10:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:09:54.960 01:10:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:10:00.336 01:10:22 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 
== mlx5 ]] 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:00.336 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:00.336 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 
-- # for pci in "${pci_devs[@]}" 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:00.336 Found net devices under 0000:86:00.0: cvl_0_0 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:00.336 Found net devices under 0000:86:00.1: cvl_0_1 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 
00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:00.336 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:00.337 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:00.337 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:00.337 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:00.337 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:00.337 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:00.337 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:00.337 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:00.337 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:00.337 01:10:22 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:00.337 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:00.337 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:00.337 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:00.337 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:10:00.337 00:10:00.337 --- 10.0.0.2 ping statistics --- 00:10:00.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.337 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:10:00.337 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:00.337 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:00.337 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.456 ms 00:10:00.337 00:10:00.337 --- 10.0.0.1 ping statistics --- 00:10:00.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.337 rtt min/avg/max/mdev = 0.456/0.456/0.456/0.000 ms 00:10:00.337 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:00.337 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:10:00.337 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:00.337 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:00.337 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:00.337 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:00.337 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:00.337 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:00.337 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:00.337 01:10:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:10:00.337 
01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:00.337 01:10:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:00.337 01:10:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:00.337 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=773291 00:10:00.337 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 773291 00:10:00.337 01:10:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:00.337 01:10:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 773291 ']' 00:10:00.337 01:10:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.337 01:10:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:00.337 01:10:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:00.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:00.337 01:10:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:00.337 01:10:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:00.337 [2024-07-25 01:10:22.782028] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:10:00.337 [2024-07-25 01:10:22.782076] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:00.337 EAL: No free 2048 kB hugepages reported on node 1 00:10:00.597 [2024-07-25 01:10:22.839281] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:00.597 [2024-07-25 01:10:22.920169] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:00.597 [2024-07-25 01:10:22.920205] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:00.597 [2024-07-25 01:10:22.920212] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:00.597 [2024-07-25 01:10:22.920222] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:00.597 [2024-07-25 01:10:22.920227] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:00.597 [2024-07-25 01:10:22.920268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:00.597 [2024-07-25 01:10:22.920365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:00.597 [2024-07-25 01:10:22.920451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:00.597 [2024-07-25 01:10:22.920452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.167 01:10:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:01.167 01:10:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:10:01.167 01:10:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:01.167 01:10:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:01.167 01:10:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:01.167 01:10:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:01.167 01:10:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:10:01.167 01:10:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.167 01:10:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:01.167 01:10:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.167 01:10:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:10:01.167 "tick_rate": 2300000000, 00:10:01.167 "poll_groups": [ 00:10:01.167 { 00:10:01.167 "name": "nvmf_tgt_poll_group_000", 00:10:01.167 "admin_qpairs": 0, 00:10:01.167 "io_qpairs": 0, 00:10:01.167 "current_admin_qpairs": 0, 00:10:01.167 "current_io_qpairs": 0, 00:10:01.167 "pending_bdev_io": 0, 00:10:01.167 "completed_nvme_io": 0, 00:10:01.167 "transports": [] 00:10:01.167 }, 00:10:01.167 { 00:10:01.167 "name": "nvmf_tgt_poll_group_001", 00:10:01.167 "admin_qpairs": 0, 00:10:01.167 "io_qpairs": 0, 00:10:01.167 "current_admin_qpairs": 
0, 00:10:01.167 "current_io_qpairs": 0, 00:10:01.167 "pending_bdev_io": 0, 00:10:01.167 "completed_nvme_io": 0, 00:10:01.167 "transports": [] 00:10:01.167 }, 00:10:01.167 { 00:10:01.167 "name": "nvmf_tgt_poll_group_002", 00:10:01.167 "admin_qpairs": 0, 00:10:01.167 "io_qpairs": 0, 00:10:01.167 "current_admin_qpairs": 0, 00:10:01.167 "current_io_qpairs": 0, 00:10:01.167 "pending_bdev_io": 0, 00:10:01.167 "completed_nvme_io": 0, 00:10:01.167 "transports": [] 00:10:01.167 }, 00:10:01.167 { 00:10:01.168 "name": "nvmf_tgt_poll_group_003", 00:10:01.168 "admin_qpairs": 0, 00:10:01.168 "io_qpairs": 0, 00:10:01.168 "current_admin_qpairs": 0, 00:10:01.168 "current_io_qpairs": 0, 00:10:01.168 "pending_bdev_io": 0, 00:10:01.168 "completed_nvme_io": 0, 00:10:01.168 "transports": [] 00:10:01.168 } 00:10:01.168 ] 00:10:01.168 }' 00:10:01.168 01:10:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:10:01.168 01:10:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:10:01.168 01:10:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:10:01.168 01:10:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:10:01.428 01:10:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:10:01.428 01:10:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:10:01.428 01:10:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:10:01.428 01:10:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:01.428 01:10:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.428 01:10:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:01.428 [2024-07-25 01:10:23.739335] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:01.428 01:10:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.428 01:10:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # 
rpc_cmd nvmf_get_stats 00:10:01.428 01:10:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.428 01:10:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:01.428 01:10:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.428 01:10:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:10:01.428 "tick_rate": 2300000000, 00:10:01.428 "poll_groups": [ 00:10:01.428 { 00:10:01.428 "name": "nvmf_tgt_poll_group_000", 00:10:01.428 "admin_qpairs": 0, 00:10:01.428 "io_qpairs": 0, 00:10:01.428 "current_admin_qpairs": 0, 00:10:01.428 "current_io_qpairs": 0, 00:10:01.428 "pending_bdev_io": 0, 00:10:01.428 "completed_nvme_io": 0, 00:10:01.428 "transports": [ 00:10:01.428 { 00:10:01.428 "trtype": "TCP" 00:10:01.428 } 00:10:01.428 ] 00:10:01.428 }, 00:10:01.428 { 00:10:01.428 "name": "nvmf_tgt_poll_group_001", 00:10:01.428 "admin_qpairs": 0, 00:10:01.428 "io_qpairs": 0, 00:10:01.428 "current_admin_qpairs": 0, 00:10:01.428 "current_io_qpairs": 0, 00:10:01.428 "pending_bdev_io": 0, 00:10:01.428 "completed_nvme_io": 0, 00:10:01.428 "transports": [ 00:10:01.428 { 00:10:01.428 "trtype": "TCP" 00:10:01.428 } 00:10:01.428 ] 00:10:01.428 }, 00:10:01.428 { 00:10:01.428 "name": "nvmf_tgt_poll_group_002", 00:10:01.428 "admin_qpairs": 0, 00:10:01.428 "io_qpairs": 0, 00:10:01.428 "current_admin_qpairs": 0, 00:10:01.428 "current_io_qpairs": 0, 00:10:01.428 "pending_bdev_io": 0, 00:10:01.428 "completed_nvme_io": 0, 00:10:01.428 "transports": [ 00:10:01.428 { 00:10:01.428 "trtype": "TCP" 00:10:01.428 } 00:10:01.428 ] 00:10:01.428 }, 00:10:01.428 { 00:10:01.428 "name": "nvmf_tgt_poll_group_003", 00:10:01.428 "admin_qpairs": 0, 00:10:01.428 "io_qpairs": 0, 00:10:01.428 "current_admin_qpairs": 0, 00:10:01.428 "current_io_qpairs": 0, 00:10:01.428 "pending_bdev_io": 0, 00:10:01.428 "completed_nvme_io": 0, 00:10:01.428 "transports": [ 00:10:01.428 { 00:10:01.428 "trtype": "TCP" 00:10:01.428 } 00:10:01.428 ] 00:10:01.428 } 
00:10:01.428 ] 00:10:01.428 }' 00:10:01.428 01:10:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:10:01.428 01:10:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:10:01.428 01:10:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:10:01.428 01:10:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:01.428 01:10:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:10:01.428 01:10:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:10:01.428 01:10:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:10:01.428 01:10:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:10:01.428 01:10:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:01.428 01:10:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:10:01.428 01:10:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:10:01.428 01:10:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:10:01.428 01:10:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:10:01.428 01:10:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:10:01.428 01:10:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.428 01:10:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:01.428 Malloc1 00:10:01.428 01:10:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.428 01:10:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:01.428 01:10:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.429 01:10:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:01.429 01:10:23 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.429 01:10:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:01.429 01:10:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.429 01:10:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:01.429 01:10:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.429 01:10:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:10:01.429 01:10:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.429 01:10:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:01.429 01:10:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.429 01:10:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:01.429 01:10:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.429 01:10:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:01.429 [2024-07-25 01:10:23.911318] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:01.429 01:10:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.429 01:10:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:10:01.429 01:10:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:10:01.429 01:10:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:10:01.429 01:10:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:10:01.429 01:10:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:01.429 01:10:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:10:01.688 01:10:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:01.688 01:10:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:10:01.688 01:10:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:01.688 01:10:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:10:01.688 01:10:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:10:01.688 01:10:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:10:01.688 [2024-07-25 01:10:23.936055] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:10:01.688 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:01.688 could not add new controller: failed to write to nvme-fabrics device 00:10:01.688 01:10:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:10:01.688 01:10:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:01.688 01:10:23 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:01.688 01:10:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:01.688 01:10:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:01.688 01:10:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.688 01:10:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:01.688 01:10:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.688 01:10:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:02.628 01:10:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:10:02.628 01:10:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:02.628 01:10:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:02.628 01:10:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:02.628 01:10:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:05.170 01:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:05.170 01:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:05.170 01:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:05.170 01:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:05.170 01:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:05.170 01:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:05.170 01:10:27 nvmf_tcp.nvmf_rpc 
-- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:05.170 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:05.170 01:10:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:05.170 01:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:05.170 01:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:05.170 01:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:05.170 01:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:05.170 01:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:05.170 01:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:05.170 01:10:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:05.170 01:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.170 01:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:05.170 01:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.170 01:10:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:05.170 01:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:10:05.170 01:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:05.170 01:10:27 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:10:05.170 01:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:05.170 01:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:10:05.170 01:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:05.170 01:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:10:05.170 01:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:05.170 01:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:10:05.170 01:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:10:05.170 01:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:05.170 [2024-07-25 01:10:27.180468] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:10:05.170 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:05.170 could not add new controller: failed to write to nvme-fabrics device 00:10:05.170 01:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:10:05.170 01:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:05.170 01:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:05.170 01:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:05.171 01:10:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:10:05.171 01:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:10:05.171 01:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:05.171 01:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.171 01:10:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:06.109 01:10:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:10:06.109 01:10:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:06.109 01:10:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:06.109 01:10:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:06.109 01:10:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:08.017 01:10:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:08.017 01:10:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:08.017 01:10:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:08.017 01:10:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:08.017 01:10:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:08.017 01:10:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:08.017 01:10:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:08.017 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:08.017 01:10:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:08.017 01:10:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:08.017 01:10:30 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:08.017 01:10:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:08.017 01:10:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:08.017 01:10:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:08.017 01:10:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:08.017 01:10:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:08.017 01:10:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:08.017 01:10:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:08.017 01:10:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:08.017 01:10:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:10:08.017 01:10:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:08.017 01:10:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:08.017 01:10:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:08.017 01:10:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:08.017 01:10:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:08.017 01:10:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:08.017 01:10:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:08.017 01:10:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:08.017 [2024-07-25 01:10:30.422368] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:08.017 01:10:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
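The failed connect attempts earlier in the run (the host NQN was not yet allowed on nqn.2016-06.io.spdk:cnode1) are wrapped in the suite's NOT helper, which inverts the wrapped command's exit status so an expected failure counts as a pass. A minimal sketch of that pattern, simplified from common/autotest_common.sh (the real helper also resolves and validates the executable via valid_exec_arg, as visible in the trace):

```shell
# Simplified sketch of the NOT() pattern: run a command that is *expected*
# to fail, and succeed only if it actually failed.
NOT() {
    local es=0
    "$@" || es=$?          # capture the wrapped command's exit status
    (( es != 0 ))          # pass only on non-zero, i.e. the command failed
}

NOT false && echo "expected failure observed"
NOT true  || echo "unexpected success caught"
```

This is why the log shows `es=1` after the rejected `nvme connect` yet the test continues: the non-zero status is the outcome the test wanted.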
00:10:08.017 01:10:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:08.017 01:10:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:08.017 01:10:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:08.017 01:10:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:08.017 01:10:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:08.017 01:10:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:08.017 01:10:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:08.017 01:10:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:08.018 01:10:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:09.398 01:10:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:09.398 01:10:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:09.398 01:10:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:09.399 01:10:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:09.399 01:10:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:11.306 01:10:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:11.306 01:10:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:11.306 01:10:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:11.306 01:10:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:11.306 01:10:33 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:11.306 01:10:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:11.306 01:10:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:11.306 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:11.306 01:10:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:11.306 01:10:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:11.306 01:10:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:11.306 01:10:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:11.306 01:10:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:11.306 01:10:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:11.306 01:10:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:11.306 01:10:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:11.306 01:10:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.306 01:10:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:11.306 01:10:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.306 01:10:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:11.306 01:10:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.306 01:10:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:11.306 01:10:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.306 01:10:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:11.306 01:10:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 
-- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:11.306 01:10:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.306 01:10:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:11.306 01:10:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.306 01:10:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:11.306 01:10:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.306 01:10:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:11.306 [2024-07-25 01:10:33.694873] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:11.306 01:10:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.306 01:10:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:11.306 01:10:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.306 01:10:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:11.306 01:10:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.306 01:10:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:11.306 01:10:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.306 01:10:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:11.306 01:10:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.306 01:10:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:12.688 01:10:34 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:12.688 01:10:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:12.688 01:10:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:12.688 01:10:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:12.688 01:10:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:14.599 01:10:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:14.599 01:10:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:14.599 01:10:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:14.599 01:10:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:14.599 01:10:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:14.599 01:10:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:14.599 01:10:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:14.599 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:14.599 01:10:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:14.599 01:10:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:14.599 01:10:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:14.599 01:10:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:14.599 01:10:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:14.599 01:10:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:14.599 01:10:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:14.599 01:10:37 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:14.599 01:10:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:14.599 01:10:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:14.599 01:10:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:14.599 01:10:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:14.599 01:10:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:14.599 01:10:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:14.599 01:10:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:14.599 01:10:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:14.599 01:10:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:14.599 01:10:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:14.599 01:10:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:14.599 01:10:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:14.599 01:10:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:14.599 01:10:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:14.599 01:10:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:14.599 [2024-07-25 01:10:37.075184] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:14.599 01:10:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:14.599 01:10:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:14.599 01:10:37 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:14.599 01:10:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:14.599 01:10:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:14.599 01:10:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:14.599 01:10:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:14.599 01:10:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:14.859 01:10:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:14.859 01:10:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:15.815 01:10:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:15.815 01:10:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:15.815 01:10:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:15.815 01:10:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:15.815 01:10:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:17.791 01:10:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:17.791 01:10:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:17.791 01:10:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:17.791 01:10:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:17.791 01:10:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:17.791 01:10:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 
0 00:10:17.791 01:10:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:18.052 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:18.052 01:10:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:18.052 01:10:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:18.052 01:10:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:18.052 01:10:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:18.052 01:10:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:18.052 01:10:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:18.052 01:10:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:18.052 01:10:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:18.052 01:10:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.052 01:10:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.052 01:10:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.052 01:10:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:18.052 01:10:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.052 01:10:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.052 01:10:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.052 01:10:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:18.052 01:10:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:18.052 01:10:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:10:18.052 01:10:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.052 01:10:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.052 01:10:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:18.052 01:10:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.052 01:10:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.052 [2024-07-25 01:10:40.371829] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:18.052 01:10:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.052 01:10:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:18.052 01:10:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.052 01:10:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.052 01:10:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.052 01:10:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:18.052 01:10:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.052 01:10:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.052 01:10:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.052 01:10:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:18.993 01:10:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:18.993 01:10:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 
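The waitforserial step that follows each connect is a bounded polling loop. A sketch of the idea, simplified from common/autotest_common.sh: the real version polls `lsblk -l -o NAME,SERIAL` with a 2-second sleep per attempt, while `list_devices` here is a hypothetical stub standing in for that output:

```shell
# Poll until a device with the given serial appears, up to 16 attempts.
waitforserial() {
    local serial=$1 expected=${2:-1} i=0 found
    while (( i++ <= 15 )); do
        # real code: lsblk -l -o NAME,SERIAL (with a `sleep 2` between polls)
        found=$(list_devices | grep -c "$serial" || true)
        if (( found == expected )); then
            return 0
        fi
    done
    return 1
}

# Hypothetical stub for lsblk output after a successful connect.
list_devices() { printf 'nvme0n1 SPDKISFASTANDAWESOME\n'; }

waitforserial SPDKISFASTANDAWESOME && echo "serial visible"
```

The matching waitforserial_disconnect helper seen after each `nvme disconnect` is the same loop with the condition inverted: it returns once `grep -q -w` no longer finds the serial.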
00:10:18.993 01:10:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:18.993 01:10:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:18.993 01:10:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:21.532 01:10:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:21.532 01:10:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:21.532 01:10:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:21.532 01:10:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:21.533 01:10:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:21.533 01:10:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:21.533 01:10:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:21.533 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:21.533 01:10:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:21.533 01:10:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:21.533 01:10:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:21.533 01:10:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:21.533 01:10:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:21.533 01:10:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:21.533 01:10:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:21.533 01:10:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:21.533 01:10:43 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.533 01:10:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:21.533 01:10:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.533 01:10:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:21.533 01:10:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.533 01:10:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:21.533 01:10:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.533 01:10:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:21.533 01:10:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:21.533 01:10:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.533 01:10:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:21.533 01:10:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.533 01:10:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:21.533 01:10:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.533 01:10:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:21.533 [2024-07-25 01:10:43.646307] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:21.533 01:10:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.533 01:10:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:21.533 01:10:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.533 01:10:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:10:21.533 01:10:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.533 01:10:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:21.533 01:10:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.533 01:10:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:21.533 01:10:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.533 01:10:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:22.474 01:10:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:22.474 01:10:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:22.474 01:10:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:22.474 01:10:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:22.474 01:10:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:24.382 01:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:24.382 01:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:24.382 01:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:24.382 01:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:24.382 01:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:24.382 01:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:24.382 01:10:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:24.643 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.643 01:10:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:24.643 01:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:24.643 01:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:24.643 01:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:24.643 01:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:24.643 01:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:24.643 01:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:24.643 01:10:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:24.643 01:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.643 01:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:24.643 01:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.643 01:10:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:24.643 01:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.643 01:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:24.643 01:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.643 01:10:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:10:24.643 01:10:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:24.643 01:10:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:24.643 01:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.643 01:10:46 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:24.643 01:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.643 01:10:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:24.643 01:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.643 01:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:24.643 [2024-07-25 01:10:46.956346] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:24.643 01:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.643 01:10:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:24.643 01:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.643 01:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:24.643 01:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.643 01:10:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:24.643 01:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.643 01:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:24.643 01:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.643 01:10:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:24.643 01:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.643 01:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:24.643 01:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.643 01:10:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:24.643 01:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.643 01:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:24.643 01:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.643 01:10:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:24.643 01:10:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:24.643 01:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.643 01:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:24.643 01:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.643 01:10:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:24.643 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.643 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:24.643 [2024-07-25 01:10:47.004459] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:24.643 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.643 01:10:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:24.643 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.643 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:24.643 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.643 01:10:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:24.643 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:10:24.643 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:24.643 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.643 01:10:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:24.643 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.643 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:24.643 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.643 01:10:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:24.643 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.643 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:24.643 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.643 01:10:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:24.643 01:10:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:24.643 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.643 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:24.643 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.643 01:10:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:24.643 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.643 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:24.643 [2024-07-25 01:10:47.056615] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:24.643 01:10:47 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.643 01:10:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:24.643 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.643 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:24.643 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.643 01:10:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:24.643 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.643 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:24.643 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.643 01:10:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:24.643 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.643 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:24.643 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.644 01:10:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:24.644 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.644 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:24.644 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.644 01:10:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:24.644 01:10:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:24.644 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.644 01:10:47 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:24.644 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.644 01:10:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:24.644 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.644 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:24.644 [2024-07-25 01:10:47.104789] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:24.644 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.644 01:10:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:24.644 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.644 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:24.644 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.644 01:10:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:24.644 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.644 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:24.644 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.644 01:10:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:24.644 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.644 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:24.644 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.644 01:10:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:24.644 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.644 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:24.904 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.904 01:10:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:24.904 01:10:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:24.904 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.904 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:24.904 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.904 01:10:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:24.904 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.904 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:24.904 [2024-07-25 01:10:47.152962] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:24.904 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.904 01:10:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:24.904 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.904 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:24.904 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.904 01:10:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:24.904 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:10:24.904 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:24.904 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.904 01:10:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:24.904 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.904 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:24.904 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.904 01:10:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:24.904 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.904 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:24.904 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.904 01:10:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:10:24.904 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.904 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:24.904 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.904 01:10:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:10:24.904 "tick_rate": 2300000000, 00:10:24.904 "poll_groups": [ 00:10:24.904 { 00:10:24.904 "name": "nvmf_tgt_poll_group_000", 00:10:24.904 "admin_qpairs": 2, 00:10:24.904 "io_qpairs": 168, 00:10:24.904 "current_admin_qpairs": 0, 00:10:24.904 "current_io_qpairs": 0, 00:10:24.904 "pending_bdev_io": 0, 00:10:24.904 "completed_nvme_io": 270, 00:10:24.904 "transports": [ 00:10:24.904 { 00:10:24.904 "trtype": "TCP" 00:10:24.904 } 00:10:24.904 ] 00:10:24.904 }, 00:10:24.904 { 00:10:24.904 "name": "nvmf_tgt_poll_group_001", 00:10:24.904 "admin_qpairs": 2, 00:10:24.904 "io_qpairs": 168, 
00:10:24.904 "current_admin_qpairs": 0, 00:10:24.904 "current_io_qpairs": 0, 00:10:24.904 "pending_bdev_io": 0, 00:10:24.904 "completed_nvme_io": 218, 00:10:24.904 "transports": [ 00:10:24.904 { 00:10:24.904 "trtype": "TCP" 00:10:24.904 } 00:10:24.904 ] 00:10:24.904 }, 00:10:24.904 { 00:10:24.904 "name": "nvmf_tgt_poll_group_002", 00:10:24.904 "admin_qpairs": 1, 00:10:24.904 "io_qpairs": 168, 00:10:24.904 "current_admin_qpairs": 0, 00:10:24.904 "current_io_qpairs": 0, 00:10:24.904 "pending_bdev_io": 0, 00:10:24.904 "completed_nvme_io": 267, 00:10:24.904 "transports": [ 00:10:24.904 { 00:10:24.904 "trtype": "TCP" 00:10:24.904 } 00:10:24.904 ] 00:10:24.904 }, 00:10:24.904 { 00:10:24.904 "name": "nvmf_tgt_poll_group_003", 00:10:24.904 "admin_qpairs": 2, 00:10:24.904 "io_qpairs": 168, 00:10:24.904 "current_admin_qpairs": 0, 00:10:24.904 "current_io_qpairs": 0, 00:10:24.905 "pending_bdev_io": 0, 00:10:24.905 "completed_nvme_io": 267, 00:10:24.905 "transports": [ 00:10:24.905 { 00:10:24.905 "trtype": "TCP" 00:10:24.905 } 00:10:24.905 ] 00:10:24.905 } 00:10:24.905 ] 00:10:24.905 }' 00:10:24.905 01:10:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:10:24.905 01:10:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:10:24.905 01:10:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:10:24.905 01:10:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:24.905 01:10:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:10:24.905 01:10:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:10:24.905 01:10:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:10:24.905 01:10:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:10:24.905 01:10:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:24.905 01:10:47 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@113 -- # (( 672 > 0 )) 00:10:24.905 01:10:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:10:24.905 01:10:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:10:24.905 01:10:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:10:24.905 01:10:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:24.905 01:10:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:10:24.905 01:10:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:24.905 01:10:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:10:24.905 01:10:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:24.905 01:10:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:24.905 rmmod nvme_tcp 00:10:24.905 rmmod nvme_fabrics 00:10:24.905 rmmod nvme_keyring 00:10:24.905 01:10:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:24.905 01:10:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:10:24.905 01:10:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:10:24.905 01:10:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 773291 ']' 00:10:24.905 01:10:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 773291 00:10:24.905 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 773291 ']' 00:10:24.905 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 773291 00:10:24.905 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:10:24.905 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:24.905 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 773291 00:10:25.165 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:25.165 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:25.165 01:10:47 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 773291' 00:10:25.165 killing process with pid 773291 00:10:25.165 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 773291 00:10:25.165 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 773291 00:10:25.165 01:10:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:25.165 01:10:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:25.165 01:10:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:25.165 01:10:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:25.165 01:10:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:25.165 01:10:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:25.165 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:25.165 01:10:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:27.707 01:10:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:27.707 00:10:27.707 real 0m32.497s 00:10:27.707 user 1m40.048s 00:10:27.707 sys 0m5.576s 00:10:27.707 01:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:27.707 01:10:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:27.707 ************************************ 00:10:27.707 END TEST nvmf_rpc 00:10:27.707 ************************************ 00:10:27.707 01:10:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:27.707 01:10:49 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:10:27.707 01:10:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:27.707 01:10:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:10:27.707 01:10:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:27.707 ************************************ 00:10:27.707 START TEST nvmf_invalid 00:10:27.707 ************************************ 00:10:27.707 01:10:49 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:10:27.707 * Looking for test storage... 00:10:27.707 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:27.707 01:10:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:27.707 01:10:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:10:27.707 01:10:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:27.707 01:10:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:27.708 01:10:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:27.708 01:10:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:27.708 01:10:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:27.708 01:10:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:27.708 01:10:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:27.708 01:10:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:27.708 01:10:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:27.708 01:10:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:27.708 01:10:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:27.708 01:10:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:27.708 01:10:49 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:27.708 01:10:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:27.708 01:10:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:27.708 01:10:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:27.708 01:10:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:27.708 01:10:49 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:27.708 01:10:49 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:27.708 01:10:49 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:27.708 01:10:49 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.708 01:10:49 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.708 01:10:49 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.708 01:10:49 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:10:27.708 01:10:49 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.708 01:10:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:10:27.708 01:10:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
00:10:27.708 01:10:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:27.708 01:10:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:27.708 01:10:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:27.708 01:10:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:27.708 01:10:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:27.708 01:10:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:27.708 01:10:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:27.708 01:10:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:10:27.708 01:10:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:27.708 01:10:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:10:27.708 01:10:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:10:27.708 01:10:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:10:27.708 01:10:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:10:27.708 01:10:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:27.708 01:10:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:27.708 01:10:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:27.708 01:10:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:27.708 01:10:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:27.708 01:10:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:27.708 01:10:49 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:10:27.708 01:10:49 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:27.708 01:10:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:27.708 01:10:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:27.708 01:10:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:10:27.708 01:10:49 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:33.042 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:33.042 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:10:33.042 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:33.042 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:33.042 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:33.042 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:33.042 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:33.042 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:10:33.042 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:33.042 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:10:33.042 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:10:33.042 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:10:33.042 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:10:33.042 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:10:33.042 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:10:33.042 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:33.042 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:33.042 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:33.042 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:33.042 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:33.042 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:33.042 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:33.042 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:33.042 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:33.042 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:33.042 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:33.042 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:33.042 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:33.042 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:33.042 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:33.042 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:33.042 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:33.042 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:33.042 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:33.042 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:33.042 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:33.042 01:10:55 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:33.042 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:33.042 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:33.042 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:33.042 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:33.042 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:33.042 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:33.042 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:33.042 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:33.042 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:33.042 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:33.042 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:33.042 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:33.042 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:33.042 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:33.042 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:33.042 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:33.042 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:33.042 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:33.042 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:33.042 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:33.043 01:10:55 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:33.043 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:33.043 Found net devices under 0000:86:00.0: cvl_0_0 00:10:33.043 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:33.043 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:33.043 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:33.043 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:33.043 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:33.043 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:33.043 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:33.043 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:33.043 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:33.043 Found net devices under 0000:86:00.1: cvl_0_1 00:10:33.043 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:33.043 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:33.043 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:10:33.043 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:33.043 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:33.043 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:33.043 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:33.043 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:33.043 01:10:55 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:33.043 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:33.043 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:33.043 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:33.043 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:33.043 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:33.043 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:33.043 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:33.043 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:33.043 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:33.043 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:33.043 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:33.043 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:33.043 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:33.043 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:33.043 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:33.043 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:33.043 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:33.043 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:33.043 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:10:33.043 00:10:33.043 --- 10.0.0.2 ping statistics --- 00:10:33.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:33.043 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:10:33.043 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:33.043 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:33.043 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:10:33.043 00:10:33.043 --- 10.0.0.1 ping statistics --- 00:10:33.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:33.043 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:10:33.043 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:33.043 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:10:33.043 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:33.043 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:33.043 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:33.043 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:33.043 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:33.043 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:33.043 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:33.043 01:10:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:10:33.043 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:33.043 01:10:55 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:33.043 01:10:55 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:33.043 01:10:55 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@481 -- # nvmfpid=781070 00:10:33.043 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 781070 00:10:33.043 01:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:33.043 01:10:55 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 781070 ']' 00:10:33.043 01:10:55 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:33.043 01:10:55 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:33.043 01:10:55 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:33.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:33.043 01:10:55 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:33.043 01:10:55 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:33.043 [2024-07-25 01:10:55.413178] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:10:33.043 [2024-07-25 01:10:55.413224] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:33.043 EAL: No free 2048 kB hugepages reported on node 1 00:10:33.043 [2024-07-25 01:10:55.466244] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:33.303 [2024-07-25 01:10:55.548036] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:33.303 [2024-07-25 01:10:55.548076] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
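Around this point the trace starts `nvmf_tgt` inside the namespace and `waitforlisten` polls (with `max_retries=100`) until the target is up on `/var/tmp/spdk.sock`. A simplified sketch of that retry idea, under the assumption that waiting for a path to appear is enough; the real SPDK helper in `autotest_common.sh` additionally checks that the pid is still alive and probes the socket via `rpc.py`:

```shell
#!/usr/bin/env bash
# Poll until a path (e.g. a UNIX-domain socket) appears, with a retry cap.
# wait_for_path is a hypothetical name for this sketch, not the SPDK helper.
wait_for_path() {
    local path=$1 max_retries=${2:-100} i
    for ((i = 0; i < max_retries; i++)); do
        [ -e "$path" ] && return 0
        sleep 0.1
    done
    return 1    # timed out
}

sock=$(mktemp -u)               # pick an unused temporary path
( sleep 0.3; : > "$sock" ) &    # simulate the target creating its socket
wait_for_path "$sock" && echo "listening"
```

Capping the retries matters in CI: if the target crashes on startup, the loop fails after a bounded delay instead of hanging the whole job.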
00:10:33.303 [2024-07-25 01:10:55.548083] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:33.303 [2024-07-25 01:10:55.548089] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:33.303 [2024-07-25 01:10:55.548094] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:33.303 [2024-07-25 01:10:55.548134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:33.303 [2024-07-25 01:10:55.548228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:33.303 [2024-07-25 01:10:55.548245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:33.303 [2024-07-25 01:10:55.548246] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.872 01:10:56 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:33.872 01:10:56 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:10:33.872 01:10:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:33.872 01:10:56 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:33.872 01:10:56 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:33.872 01:10:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:33.872 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:10:33.872 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode1839 00:10:34.131 [2024-07-25 01:10:56.432485] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:10:34.131 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # 
out='request: 00:10:34.131 { 00:10:34.131 "nqn": "nqn.2016-06.io.spdk:cnode1839", 00:10:34.131 "tgt_name": "foobar", 00:10:34.131 "method": "nvmf_create_subsystem", 00:10:34.131 "req_id": 1 00:10:34.131 } 00:10:34.131 Got JSON-RPC error response 00:10:34.131 response: 00:10:34.131 { 00:10:34.131 "code": -32603, 00:10:34.131 "message": "Unable to find target foobar" 00:10:34.131 }' 00:10:34.131 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:10:34.131 { 00:10:34.131 "nqn": "nqn.2016-06.io.spdk:cnode1839", 00:10:34.131 "tgt_name": "foobar", 00:10:34.131 "method": "nvmf_create_subsystem", 00:10:34.131 "req_id": 1 00:10:34.131 } 00:10:34.131 Got JSON-RPC error response 00:10:34.131 response: 00:10:34.131 { 00:10:34.131 "code": -32603, 00:10:34.131 "message": "Unable to find target foobar" 00:10:34.131 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:10:34.131 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:10:34.131 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode22954 00:10:34.390 [2024-07-25 01:10:56.633235] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22954: invalid serial number 'SPDKISFASTANDAWESOME' 00:10:34.390 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:10:34.390 { 00:10:34.390 "nqn": "nqn.2016-06.io.spdk:cnode22954", 00:10:34.390 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:10:34.390 "method": "nvmf_create_subsystem", 00:10:34.390 "req_id": 1 00:10:34.390 } 00:10:34.390 Got JSON-RPC error response 00:10:34.390 response: 00:10:34.390 { 00:10:34.390 "code": -32602, 00:10:34.390 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:10:34.390 }' 00:10:34.390 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:10:34.390 { 00:10:34.390 "nqn": 
"nqn.2016-06.io.spdk:cnode22954", 00:10:34.390 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:10:34.390 "method": "nvmf_create_subsystem", 00:10:34.390 "req_id": 1 00:10:34.390 } 00:10:34.390 Got JSON-RPC error response 00:10:34.390 response: 00:10:34.390 { 00:10:34.390 "code": -32602, 00:10:34.390 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:10:34.390 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:10:34.390 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:10:34.390 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode11841 00:10:34.390 [2024-07-25 01:10:56.821841] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11841: invalid model number 'SPDK_Controller' 00:10:34.390 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:10:34.390 { 00:10:34.390 "nqn": "nqn.2016-06.io.spdk:cnode11841", 00:10:34.390 "model_number": "SPDK_Controller\u001f", 00:10:34.390 "method": "nvmf_create_subsystem", 00:10:34.390 "req_id": 1 00:10:34.390 } 00:10:34.390 Got JSON-RPC error response 00:10:34.390 response: 00:10:34.390 { 00:10:34.390 "code": -32602, 00:10:34.390 "message": "Invalid MN SPDK_Controller\u001f" 00:10:34.390 }' 00:10:34.390 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:10:34.390 { 00:10:34.390 "nqn": "nqn.2016-06.io.spdk:cnode11841", 00:10:34.390 "model_number": "SPDK_Controller\u001f", 00:10:34.390 "method": "nvmf_create_subsystem", 00:10:34.390 "req_id": 1 00:10:34.390 } 00:10:34.390 Got JSON-RPC error response 00:10:34.390 response: 00:10:34.390 { 00:10:34.390 "code": -32602, 00:10:34.390 "message": "Invalid MN SPDK_Controller\u001f" 00:10:34.390 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:10:34.390 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:10:34.390 01:10:56 nvmf_tcp.nvmf_invalid 
-- target/invalid.sh@19 -- # local length=21 ll 00:10:34.390 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:10:34.390 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:10:34.390 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:10:34.390 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:10:34.390 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.390 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:10:34.390 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:10:34.390 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:10:34.390 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.391 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.391 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:10:34.391 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:10:34.391 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:10:34.391 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.391 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.391 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:10:34.391 01:10:56 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x75' 00:10:34.391 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:10:34.391 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.391 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.391 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:10:34.391 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:10:34.391 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:10:34.391 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.391 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.391 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:10:34.391 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:10:34.391 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:10:34.391 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.391 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.650 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:10:34.650 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:10:34.650 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:10:34.650 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.650 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.650 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:10:34.650 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:10:34.650 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:10:34.650 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.650 01:10:56 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:10:34.650 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:10:34.650 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:10:34.650 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:10:34.650 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.650 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.650 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:10:34.650 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:10:34.650 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:10:34.650 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.650 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.650 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:10:34.650 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:10:34.650 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:10:34.650 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.650 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.650 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:10:34.650 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:10:34.650 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:10:34.650 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.650 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.650 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:10:34.650 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:10:34.650 01:10:56 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # string+=9 00:10:34.650 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.650 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.650 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:10:34.650 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:10:34.650 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:10:34.650 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.650 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.650 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:10:34.650 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:10:34.650 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:10:34.650 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.650 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.650 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:10:34.650 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:10:34.650 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:10:34.650 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.650 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.650 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:10:34.650 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:10:34.650 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:10:34.650 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.650 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.650 01:10:56 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 90 00:10:34.650 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:10:34.650 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:10:34.650 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.650 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.651 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:10:34.651 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:10:34.651 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:10:34.651 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.651 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.651 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:10:34.651 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:10:34.651 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:10:34.651 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.651 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.651 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:10:34.651 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:10:34.651 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:10:34.651 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.651 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.651 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:10:34.651 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:10:34.651 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:10:34.651 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.651 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.651 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ 8 == \- ]] 00:10:34.651 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '8quX}]o]j#F9C%M(Za\{.' 00:10:34.651 01:10:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '8quX}]o]j#F9C%M(Za\{.' nqn.2016-06.io.spdk:cnode22680 00:10:34.912 [2024-07-25 01:10:57.146918] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22680: invalid serial number '8quX}]o]j#F9C%M(Za\{.' 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:10:34.912 { 00:10:34.912 "nqn": "nqn.2016-06.io.spdk:cnode22680", 00:10:34.912 "serial_number": "8quX}]o]j#F9C%M(Za\\{.", 00:10:34.912 "method": "nvmf_create_subsystem", 00:10:34.912 "req_id": 1 00:10:34.912 } 00:10:34.912 Got JSON-RPC error response 00:10:34.912 response: 00:10:34.912 { 00:10:34.912 "code": -32602, 00:10:34.912 "message": "Invalid SN 8quX}]o]j#F9C%M(Za\\{." 00:10:34.912 }' 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:10:34.912 { 00:10:34.912 "nqn": "nqn.2016-06.io.spdk:cnode22680", 00:10:34.912 "serial_number": "8quX}]o]j#F9C%M(Za\\{.", 00:10:34.912 "method": "nvmf_create_subsystem", 00:10:34.912 "req_id": 1 00:10:34.912 } 00:10:34.912 Got JSON-RPC error response 00:10:34.912 response: 00:10:34.912 { 00:10:34.912 "code": -32602, 00:10:34.912 "message": "Invalid SN 8quX}]o]j#F9C%M(Za\\{." 
00:10:34.912 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # string+=z 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 112 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.912 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x78' 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # string+=e 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 109 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:10:34.913 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:10:35.174 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:10:35.174 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:35.174 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:35.174 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:10:35.174 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:10:35.174 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:10:35.174 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:35.174 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:35.174 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:10:35.174 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:10:35.174 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:10:35.174 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:35.174 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:35.174 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:10:35.174 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:10:35.174 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:10:35.174 01:10:57 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:10:35.174 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:35.174 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:10:35.174 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:10:35.174 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:10:35.174 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:35.174 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:35.174 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:10:35.174 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:10:35.174 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:10:35.174 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:35.174 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:35.174 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ f == \- ]] 00:10:35.174 01:10:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'fyqZ*sz-ek9ps7tO$PFUxNcaXy2CCe:@r(mVl<Kri' [...] /dev/null' 00:10:37.251 01:10:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:39.160 01:11:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:39.160 00:10:39.160 real 0m11.858s 00:10:39.160 user 0m19.733s 00:10:39.160 sys 0m5.069s 00:10:39.160 01:11:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:39.160 01:11:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:39.160 ************************************ 00:10:39.160 END TEST nvmf_invalid 00:10:39.160 ************************************ 00:10:39.160 01:11:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:39.160 01:11:01 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:10:39.160 01:11:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:39.160 01:11:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:39.160 01:11:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:39.420 ************************************ 00:10:39.420 START TEST nvmf_abort 00:10:39.420 ************************************ 00:10:39.421 01:11:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:10:39.421 * Looking for test storage... 00:10:39.421 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:39.421 01:11:01 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:39.421 01:11:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:10:39.421 01:11:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:39.421 01:11:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:39.421 01:11:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:39.421 01:11:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:39.421 01:11:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:39.421 01:11:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:39.421 01:11:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:39.421 01:11:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:39.421 01:11:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:39.421 01:11:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:39.421 01:11:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:39.421 01:11:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:39.421 01:11:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:39.421 01:11:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:39.421 01:11:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:39.421 01:11:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:39.421 01:11:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:39.421 01:11:01 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:39.421 01:11:01 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:39.421 01:11:01 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:39.421 01:11:01 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.421 01:11:01 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.421 01:11:01 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.421 01:11:01 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:10:39.421 01:11:01 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.421 01:11:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:10:39.421 01:11:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:39.421 
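Each nested source of paths/export.sh prepends the golangci, protoc, and Go directories again, which is why the exported PATH above repeats the same prefixes several times. Bash resolves a command via the first matching entry, so the duplicates are harmless, but a small illustrative helper (not part of the SPDK tree) can collapse them while preserving first-occurrence order:

```shell
#!/usr/bin/env bash
# Collapse duplicate entries in a PATH-like string, keeping the first
# occurrence of each directory and the original ordering.
dedup_path() {
    local out= seen=: entry
    local IFS=:                       # split $1 on colons
    for entry in $1; do
        # emit each directory only the first time we see it
        if [[ $seen != *":$entry:"* ]]; then
            out+=${out:+:}$entry
            seen+=$entry:
        fi
    done
    printf '%s\n' "$out"
}

dedup_path "/opt/go/bin:/usr/bin:/opt/go/bin:/bin:/usr/bin"
# → /opt/go/bin:/usr/bin:/bin
```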
01:11:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:39.421 01:11:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:39.421 01:11:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:39.421 01:11:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:39.421 01:11:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:39.421 01:11:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:39.421 01:11:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:39.421 01:11:01 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:39.421 01:11:01 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:10:39.421 01:11:01 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:10:39.421 01:11:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:39.421 01:11:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:39.421 01:11:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:39.421 01:11:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:39.421 01:11:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:39.421 01:11:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:39.421 01:11:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:39.421 01:11:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:39.421 01:11:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:39.421 01:11:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:39.421 01:11:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:10:39.421 01:11:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # 
set +x 00:10:44.783 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:44.783 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:10:44.783 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:44.783 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:44.783 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:44.783 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:44.783 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:44.783 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:10:44.783 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:44.783 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:10:44.783 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:10:44.783 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:10:44.783 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:10:44.783 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:10:44.783 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:10:44.783 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:44.783 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:44.783 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:44.783 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:44.783 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:44.783 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:44.783 01:11:06 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:44.783 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:44.783 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:44.783 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:44.783 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:44.783 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:44.783 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:44.783 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:44.783 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:44.783 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:44.783 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:44.783 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:44.783 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:44.783 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:44.783 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:44.783 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:44.783 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:44.783 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:44.783 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:44.783 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:44.783 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 
0x159b)' 00:10:44.783 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:44.783 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:44.783 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:44.783 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:44.783 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:44.783 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:44.783 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:44.783 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:44.783 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:44.783 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:44.783 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:44.783 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:44.784 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:44.784 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:44.784 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:44.784 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:44.784 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:44.784 Found net devices under 0000:86:00.0: cvl_0_0 00:10:44.784 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:44.784 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:44.784 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:44.784 01:11:06 nvmf_tcp.nvmf_abort 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:44.784 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:44.784 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:44.784 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:44.784 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:44.784 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:44.784 Found net devices under 0000:86:00.1: cvl_0_1 00:10:44.784 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:44.784 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:44.784 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:10:44.784 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:44.784 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:44.784 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:44.784 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:44.784 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:44.784 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:44.784 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:44.784 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:44.784 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:44.784 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:44.784 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:44.784 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:10:44.784 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:44.784 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:44.784 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:44.784 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:44.784 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:44.784 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:44.784 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:44.784 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:44.784 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:44.784 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:44.784 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:44.784 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:44.784 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.241 ms 00:10:44.784 00:10:44.784 --- 10.0.0.2 ping statistics --- 00:10:44.784 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:44.784 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:10:44.784 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:44.784 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:44.784 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.362 ms 00:10:44.784 00:10:44.784 --- 10.0.0.1 ping statistics --- 00:10:44.784 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:44.784 rtt min/avg/max/mdev = 0.362/0.362/0.362/0.000 ms 00:10:44.784 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:44.784 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:10:44.784 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:44.784 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:44.784 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:44.784 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:44.784 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:44.784 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:44.784 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:44.784 01:11:06 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:10:44.784 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:44.784 01:11:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:44.784 01:11:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:44.784 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=785244 00:10:44.784 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 785244 00:10:44.784 01:11:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 785244 ']' 00:10:44.784 01:11:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:44.784 01:11:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:44.784 01:11:06 
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:44.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:44.784 01:11:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:44.784 01:11:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:44.784 01:11:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:44.784 [2024-07-25 01:11:06.993210] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:10:44.784 [2024-07-25 01:11:06.993250] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:44.784 EAL: No free 2048 kB hugepages reported on node 1 00:10:44.784 [2024-07-25 01:11:07.049379] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:44.784 [2024-07-25 01:11:07.128268] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:44.784 [2024-07-25 01:11:07.128306] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:44.784 [2024-07-25 01:11:07.128313] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:44.784 [2024-07-25 01:11:07.128319] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:44.784 [2024-07-25 01:11:07.128324] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:44.784 [2024-07-25 01:11:07.128435] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:44.784 [2024-07-25 01:11:07.128541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:44.784 [2024-07-25 01:11:07.128543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:45.354 01:11:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:45.354 01:11:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:10:45.354 01:11:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:45.354 01:11:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:45.354 01:11:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:45.354 01:11:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:45.354 01:11:07 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:10:45.354 01:11:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.354 01:11:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:45.354 [2024-07-25 01:11:07.844674] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:45.614 01:11:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.614 01:11:07 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:10:45.614 01:11:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.615 01:11:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:45.615 Malloc0 00:10:45.615 01:11:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.615 01:11:07 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 
1000000 00:10:45.615 01:11:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.615 01:11:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:45.615 Delay0 00:10:45.615 01:11:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.615 01:11:07 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:45.615 01:11:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.615 01:11:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:45.615 01:11:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.615 01:11:07 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:10:45.615 01:11:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.615 01:11:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:45.615 01:11:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.615 01:11:07 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:45.615 01:11:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.615 01:11:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:45.615 [2024-07-25 01:11:07.903791] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:45.615 01:11:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.615 01:11:07 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:45.615 01:11:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.615 01:11:07 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@10 -- # set +x 00:10:45.615 01:11:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.615 01:11:07 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:10:45.615 EAL: No free 2048 kB hugepages reported on node 1 00:10:45.615 [2024-07-25 01:11:08.009600] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:48.157 Initializing NVMe Controllers 00:10:48.157 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:10:48.157 controller IO queue size 128 less than required 00:10:48.157 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:10:48.157 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:10:48.157 Initialization complete. Launching workers. 
00:10:48.157 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 124, failed: 41591 00:10:48.157 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 41653, failed to submit 62 00:10:48.157 success 41595, unsuccess 58, failed 0 00:10:48.157 01:11:10 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:48.157 01:11:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:48.157 01:11:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:48.157 01:11:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:48.157 01:11:10 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:10:48.157 01:11:10 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:10:48.157 01:11:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:48.157 01:11:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:10:48.157 01:11:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:48.157 01:11:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:10:48.157 01:11:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:48.157 01:11:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:48.157 rmmod nvme_tcp 00:10:48.157 rmmod nvme_fabrics 00:10:48.157 rmmod nvme_keyring 00:10:48.157 01:11:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:48.157 01:11:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:10:48.157 01:11:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:10:48.157 01:11:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 785244 ']' 00:10:48.157 01:11:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 785244 00:10:48.157 01:11:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 785244 ']' 00:10:48.157 01:11:10 
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 785244 00:10:48.157 01:11:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:10:48.157 01:11:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:48.157 01:11:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 785244 00:10:48.157 01:11:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:48.157 01:11:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:48.157 01:11:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 785244' 00:10:48.157 killing process with pid 785244 00:10:48.157 01:11:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 785244 00:10:48.157 01:11:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 785244 00:10:48.157 01:11:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:48.157 01:11:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:48.157 01:11:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:48.157 01:11:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:48.157 01:11:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:48.157 01:11:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:48.157 01:11:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:48.157 01:11:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:50.070 01:11:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:50.070 00:10:50.070 real 0m10.853s 00:10:50.070 user 0m13.003s 00:10:50.070 sys 0m4.930s 00:10:50.070 01:11:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 
00:10:50.070 01:11:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:50.070 ************************************ 00:10:50.070 END TEST nvmf_abort 00:10:50.070 ************************************ 00:10:50.070 01:11:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:50.070 01:11:12 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:50.070 01:11:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:50.070 01:11:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:50.070 01:11:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:50.330 ************************************ 00:10:50.330 START TEST nvmf_ns_hotplug_stress 00:10:50.330 ************************************ 00:10:50.330 01:11:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:50.330 * Looking for test storage... 
00:10:50.330 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:50.330 01:11:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:50.330 01:11:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:10:50.330 01:11:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:50.330 01:11:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:50.330 01:11:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:50.330 01:11:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:50.330 01:11:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:50.330 01:11:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:50.330 01:11:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:50.330 01:11:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:50.330 01:11:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:50.330 01:11:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:50.330 01:11:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:50.330 01:11:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:50.331 01:11:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:50.331 01:11:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:50.331 01:11:12 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:50.331 01:11:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:50.331 01:11:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:50.331 01:11:12 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:50.331 01:11:12 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:50.331 01:11:12 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:50.331 01:11:12 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.331 01:11:12 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.331 01:11:12 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.331 01:11:12 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:10:50.331 01:11:12 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.331 01:11:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:10:50.331 01:11:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:50.331 01:11:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:50.331 01:11:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:50.331 01:11:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:50.331 01:11:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:50.331 01:11:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:50.331 01:11:12 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:50.331 01:11:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:50.331 01:11:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:50.331 01:11:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:10:50.331 01:11:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:50.331 01:11:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:50.331 01:11:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:50.331 01:11:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:50.331 01:11:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:50.331 01:11:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:50.331 01:11:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:50.331 01:11:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:50.331 01:11:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:50.331 01:11:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:50.331 01:11:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:10:50.331 01:11:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:55.616 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:55.616 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:10:55.616 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:10:55.616 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:55.616 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:55.616 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:55.616 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:55.616 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:10:55.616 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:55.616 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:10:55.616 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:10:55.616 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:10:55.616 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:10:55.616 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:10:55.616 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:10:55.616 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:55.616 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:55.616 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:55.616 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:55.616 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:55.616 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:55.616 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:55.616 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:55.616 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:55.616 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:55.616 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:55.616 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:55.616 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:55.616 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:55.616 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:55.616 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:55.616 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:55.616 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:55.616 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:55.616 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:55.616 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:55.616 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:55.616 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:55.616 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:55.616 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:55.616 
01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:55.616 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:55.616 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:55.616 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:55.616 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:55.616 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:55.616 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:55.616 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:55.616 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:55.616 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:55.616 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:55.616 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:55.616 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:55.616 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:55.617 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:55.617 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:55.617 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:55.617 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:55.617 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:55.617 
Found net devices under 0000:86:00.0: cvl_0_0 00:10:55.617 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:55.617 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:55.617 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:55.617 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:55.617 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:55.617 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:55.617 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:55.617 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:55.617 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:55.617 Found net devices under 0000:86:00.1: cvl_0_1 00:10:55.617 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:55.617 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:55.617 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:10:55.617 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:55.617 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:55.617 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:55.617 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:55.617 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:55.617 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:55.617 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:55.617 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:55.617 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:55.617 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:55.617 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:55.617 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:55.617 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:55.617 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:55.617 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:55.617 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:55.617 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:55.617 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:55.617 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:55.617 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:55.617 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:55.617 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:55.617 01:11:17 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:55.617 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:55.617 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:10:55.617 00:10:55.617 --- 10.0.0.2 ping statistics --- 00:10:55.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:55.617 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:10:55.617 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:55.617 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:55.617 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.410 ms 00:10:55.617 00:10:55.617 --- 10.0.0.1 ping statistics --- 00:10:55.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:55.617 rtt min/avg/max/mdev = 0.410/0.410/0.410/0.000 ms 00:10:55.617 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:55.617 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:10:55.617 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:55.617 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:55.617 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:55.617 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:55.617 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:55.617 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:55.617 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:55.617 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:10:55.617 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- 
# timing_enter start_nvmf_tgt 00:10:55.617 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:55.617 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:55.617 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=789104 00:10:55.617 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 789104 00:10:55.617 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 789104 ']' 00:10:55.617 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.617 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:55.617 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:55.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:55.617 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:55.617 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:55.617 01:11:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:55.617 [2024-07-25 01:11:17.402971] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
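The entries above launch `nvmf_tgt` inside the target network namespace (`ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xE`), record its PID, and then `waitforlisten` polls until the process is listening on the RPC socket `/var/tmp/spdk.sock`. A minimal sketch of that launch-and-poll pattern; the `run` helper is a hypothetical dry-run wrapper that only prints the command, the binary path is assumed, and the real `waitforlisten` tests for a live UNIX socket rather than plain file existence:

```shell
# Dry-run sketch of the namespace launch seen in the log above.
# `run` is an illustrative wrapper: it echoes instead of executing.
run() { echo "+ $*"; }

NETNS=cvl_0_0_ns_spdk
NVMF_TGT=/path/to/spdk/build/bin/nvmf_tgt   # assumed install location
RPC_SOCK=/var/tmp/spdk.sock

# Start the target inside the namespace with the same flags as the log.
run ip netns exec "$NETNS" "$NVMF_TGT" -i 0 -e 0xFFFF -m 0xE

# waitforlisten-style poll: retry until the RPC socket path appears,
# up to max_retries attempts (real code checks for a UNIX socket).
wait_for_listen() {
    sock=$1; max_retries=$2; n=0
    while [ "$n" -lt "$max_retries" ]; do
        if [ -e "$sock" ]; then   # stand-in for the socket-liveness check
            return 0
        fi
        n=$((n + 1))
        sleep 0.1
    done
    return 1
}
```

In the real harness the poll is bounded at 100 retries (`max_retries=100` in autotest_common.sh), and the script aborts the test if the target never comes up.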
00:10:55.617 [2024-07-25 01:11:17.403014] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:55.617 EAL: No free 2048 kB hugepages reported on node 1 00:10:55.617 [2024-07-25 01:11:17.459703] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:55.617 [2024-07-25 01:11:17.538975] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:55.617 [2024-07-25 01:11:17.539010] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:55.617 [2024-07-25 01:11:17.539017] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:55.617 [2024-07-25 01:11:17.539023] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:55.617 [2024-07-25 01:11:17.539028] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
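The `-c 0xE` core mask in the EAL parameters above is why the app reports "Total cores available: 3": bits 1-3 of the mask are set, so reactors come up on cores 1, 2, and 3. A small illustrative helper (not part of SPDK) that expands such a hex mask into the selected core IDs:

```shell
# Expand a hex CPU mask (e.g. 0xE) into the list of selected core IDs.
mask_to_cores() {
    mask=$(( $1 ))   # POSIX shell arithmetic accepts 0x-prefixed hex
    core=0
    cores=""
    while [ "$mask" -ne 0 ]; do
        if [ $(( mask & 1 )) -ne 0 ]; then
            cores="$cores $core"   # this bit is set: core selected
        fi
        mask=$(( mask >> 1 ))
        core=$(( core + 1 ))
    done
    printf '%s\n' "${cores# }"     # trim the leading space
}

mask_to_cores 0xE    # the mask used above; prints "1 2 3"
```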
00:10:55.617 [2024-07-25 01:11:17.539126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:55.617 [2024-07-25 01:11:17.539146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:55.617 [2024-07-25 01:11:17.539148] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:55.877 01:11:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:55.877 01:11:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:10:55.877 01:11:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:55.877 01:11:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:55.877 01:11:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:55.877 01:11:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:55.877 01:11:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:10:55.877 01:11:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:56.137 [2024-07-25 01:11:18.407535] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:56.137 01:11:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:56.137 01:11:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:56.397 [2024-07-25 01:11:18.792526] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 10.0.0.2 port 4420 *** 00:10:56.397 01:11:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:56.656 01:11:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:10:56.915 Malloc0 00:10:56.915 01:11:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:56.915 Delay0 00:10:56.915 01:11:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:57.174 01:11:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:10:57.434 NULL1 00:10:57.434 01:11:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:57.434 01:11:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:10:57.434 01:11:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=789518 00:10:57.434 01:11:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 789518 00:10:57.434 01:11:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:57.694 EAL: No free 2048 kB hugepages reported on node 1 00:10:58.633 Read completed with error (sct=0, sc=11) 00:10:58.633 01:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:58.633 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:58.893 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:58.893 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:58.893 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:58.893 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:58.893 01:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:10:58.893 01:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:10:59.153 true 00:10:59.153 01:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 789518 00:10:59.153 01:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:00.122 01:11:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:00.122 01:11:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:11:00.122 01:11:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize 
NULL1 1002 00:11:00.383 true 00:11:00.383 01:11:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 789518 00:11:00.383 01:11:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:00.644 01:11:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:00.644 01:11:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:11:00.644 01:11:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:11:00.904 true 00:11:00.904 01:11:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 789518 00:11:00.904 01:11:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:02.281 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:02.281 01:11:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:02.281 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:02.281 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:02.281 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:02.281 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:02.281 01:11:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:11:02.281 01:11:24 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:11:02.540 true 00:11:02.540 01:11:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 789518 00:11:02.540 01:11:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:03.107 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:03.367 01:11:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:03.367 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:03.367 01:11:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:11:03.367 01:11:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:11:03.626 true 00:11:03.626 01:11:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 789518 00:11:03.626 01:11:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:03.886 01:11:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:03.886 01:11:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:11:03.886 01:11:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 
00:11:04.145 true 00:11:04.145 01:11:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 789518 00:11:04.145 01:11:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:04.410 01:11:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:04.410 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:04.410 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:04.410 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:04.410 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:04.410 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:04.410 [2024-07-25 01:11:26.875137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.410 [2024-07-25 01:11:26.875212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.410 [2024-07-25 01:11:26.875255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.410 [2024-07-25 01:11:26.875302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.410 [2024-07-25 01:11:26.875338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.410 [2024-07-25 01:11:26.875376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.410 [2024-07-25 01:11:26.875413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.410 [2024-07-25 01:11:26.875441] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.410 [... further identical ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd *ERROR*: 'Read NLB 1 * block size 512 > SGL length 1' entries elided ...]
> SGL length 1 00:11:04.411 [2024-07-25 01:11:26.880560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.411 [2024-07-25 01:11:26.880595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.411 [2024-07-25 01:11:26.880630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.411 [2024-07-25 01:11:26.880668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.411 [2024-07-25 01:11:26.880710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.411 [2024-07-25 01:11:26.881207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.411 [2024-07-25 01:11:26.881252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.411 [2024-07-25 01:11:26.881289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.411 [2024-07-25 01:11:26.881326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.411 [2024-07-25 01:11:26.881367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.411 [2024-07-25 01:11:26.881403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.411 [2024-07-25 01:11:26.881440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.411 [2024-07-25 01:11:26.881478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.411 [2024-07-25 01:11:26.881520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.411 [2024-07-25 01:11:26.881563] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.411 [2024-07-25 01:11:26.881605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.411 [2024-07-25 01:11:26.881646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.411 [2024-07-25 01:11:26.881693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.411 [2024-07-25 01:11:26.881736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.411 [2024-07-25 01:11:26.881777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.411 [2024-07-25 01:11:26.881817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.411 [2024-07-25 01:11:26.881860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.411 [2024-07-25 01:11:26.881900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.411 [2024-07-25 01:11:26.881942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.411 [2024-07-25 01:11:26.881984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.411 [2024-07-25 01:11:26.882030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.411 [2024-07-25 01:11:26.882077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.411 [2024-07-25 01:11:26.882117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.411 [2024-07-25 01:11:26.882161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:04.411 [2024-07-25 01:11:26.882203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.411 [2024-07-25 01:11:26.882248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.411 [2024-07-25 01:11:26.882284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.411 [2024-07-25 01:11:26.882318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.411 [2024-07-25 01:11:26.882347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.411 [2024-07-25 01:11:26.882389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.411 [2024-07-25 01:11:26.882430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.411 [2024-07-25 01:11:26.882467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.411 [2024-07-25 01:11:26.882505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.411 [2024-07-25 01:11:26.882549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.411 [2024-07-25 01:11:26.882587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.411 [2024-07-25 01:11:26.882623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.411 [2024-07-25 01:11:26.882660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.411 [2024-07-25 01:11:26.882691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.411 [2024-07-25 
01:11:26.882730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.411 [2024-07-25 01:11:26.882770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.411 [2024-07-25 01:11:26.882816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.882857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.882894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.882932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.882971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.883011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.883056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.883098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.883145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.883189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.883231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.883278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.883322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.883363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.883405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.883445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.883486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.883523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.883565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.883595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.883634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.883672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.883716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.883753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.883944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.883981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.884021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 
[2024-07-25 01:11:26.884061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.884107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.884153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.884196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.884241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.884286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.884330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.884372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.884707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.884755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.884804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.884847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.884890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.884946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.884986] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.885030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.885080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.885122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.885170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.885211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.885258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.885305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.885351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.885394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.885437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.885483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.885528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.885559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.885598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:04.412 [2024-07-25 01:11:26.885633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.885678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.885723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.885772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.885819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.885862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.885903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.885942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.885981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.886015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.886051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.886084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.886123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.886162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.886202] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.886243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.886284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.886322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.886364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.886399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.886438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.886473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.886509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.886550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.886593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.886633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.886676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.886736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.886777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.886823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.886866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.887392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.887440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.887483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.887528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.887569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.887617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.887657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.887701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.887743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.412 [2024-07-25 01:11:26.887799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.413 [2024-07-25 01:11:26.887843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.413 [2024-07-25 01:11:26.887885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.413 [2024-07-25 
01:11:26.887939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.413 [2024-07-25 01:11:26.887979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.413 [2024-07-25 01:11:26.888024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.413 [2024-07-25 01:11:26.888070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.413 [2024-07-25 01:11:26.888112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.413 [2024-07-25 01:11:26.888163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.413 [2024-07-25 01:11:26.888209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.413 [2024-07-25 01:11:26.888252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.413 [2024-07-25 01:11:26.888295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.413 [2024-07-25 01:11:26.888337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.413 [2024-07-25 01:11:26.888379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.413 [2024-07-25 01:11:26.888421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.413 [2024-07-25 01:11:26.888461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.413 [2024-07-25 01:11:26.888499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.413 [2024-07-25 01:11:26.888529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.413 [2024-07-25 01:11:26.888568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.413 [2024-07-25 01:11:26.888606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.413 [2024-07-25 01:11:26.888648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.413 [2024-07-25 01:11:26.888685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.413 [2024-07-25 01:11:26.888724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.413 [2024-07-25 01:11:26.888762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.413 [2024-07-25 01:11:26.888802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.413 [2024-07-25 01:11:26.888841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.413 [2024-07-25 01:11:26.888880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.413 [2024-07-25 01:11:26.888918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.413 [2024-07-25 01:11:26.888955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.413 [2024-07-25 01:11:26.888994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.413 [2024-07-25 01:11:26.889023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.413 [2024-07-25 01:11:26.889067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.413 
[2024-07-25 01:11:26.889105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.413 [2024-07-25 01:11:26.889142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.413 [2024-07-25 01:11:26.889180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.413 [2024-07-25 01:11:26.889217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.413 [2024-07-25 01:11:26.889254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.413 [2024-07-25 01:11:26.889290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.413 [2024-07-25 01:11:26.889329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.413 [2024-07-25 01:11:26.889361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.413 [2024-07-25 01:11:26.889396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.413 [2024-07-25 01:11:26.889436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.413 [2024-07-25 01:11:26.889471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.413 [2024-07-25 01:11:26.889511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.413 [2024-07-25 01:11:26.889554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.413 [2024-07-25 01:11:26.889591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.413 [2024-07-25 01:11:26.889634] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.413 [2024-07-25 01:11:26.889673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.413 [2024-07-25 01:11:26.889708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.413 [2024-07-25 01:11:26.889746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.413 [2024-07-25 01:11:26.889788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.413 [2024-07-25 01:11:26.889836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.413 [2024-07-25 01:11:26.889883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.413 [2024-07-25 01:11:26.889934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.413 [2024-07-25 01:11:26.889977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.413 [2024-07-25 01:11:26.890157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.413 [2024-07-25 01:11:26.890204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.413 [2024-07-25 01:11:26.890247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.413 [2024-07-25 01:11:26.890290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.413 [2024-07-25 01:11:26.890330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.413 [2024-07-25 01:11:26.890372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:04.413 [2024-07-25 01:11:26.890413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.716 01:11:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:11:04.716 01:11:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:11:04.716
* block size 512 > SGL length 1 00:11:04.716 [2024-07-25 01:11:26.905086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.716 [2024-07-25 01:11:26.905119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.716 [2024-07-25 01:11:26.905167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.716 [2024-07-25 01:11:26.905213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.716 [2024-07-25 01:11:26.905258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.716 [2024-07-25 01:11:26.905302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.716 [2024-07-25 01:11:26.905343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.716 [2024-07-25 01:11:26.905389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.716 [2024-07-25 01:11:26.905430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.716 [2024-07-25 01:11:26.905477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.716 [2024-07-25 01:11:26.905520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.716 [2024-07-25 01:11:26.905566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.716 [2024-07-25 01:11:26.906078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.716 [2024-07-25 01:11:26.906115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.716 [2024-07-25 
01:11:26.906156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.716 [2024-07-25 01:11:26.906193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.716 [2024-07-25 01:11:26.906234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.716 [2024-07-25 01:11:26.906273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.716 [2024-07-25 01:11:26.906312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.716 [2024-07-25 01:11:26.906348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.716 [2024-07-25 01:11:26.906385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.716 [2024-07-25 01:11:26.906423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.716 [2024-07-25 01:11:26.906465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.716 [2024-07-25 01:11:26.906500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.716 [2024-07-25 01:11:26.906535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.716 [2024-07-25 01:11:26.906574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.716 [2024-07-25 01:11:26.906618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.716 [2024-07-25 01:11:26.906661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.716 [2024-07-25 01:11:26.906704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.716 [2024-07-25 01:11:26.906742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.716 [2024-07-25 01:11:26.906781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.716 [2024-07-25 01:11:26.906822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.716 [2024-07-25 01:11:26.906862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.716 [2024-07-25 01:11:26.906906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.716 [2024-07-25 01:11:26.906943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.716 [2024-07-25 01:11:26.906984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.716 [2024-07-25 01:11:26.907022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.716 [2024-07-25 01:11:26.907067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.716 [2024-07-25 01:11:26.907110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.716 [2024-07-25 01:11:26.907156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.716 [2024-07-25 01:11:26.907200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.716 [2024-07-25 01:11:26.907251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.716 [2024-07-25 01:11:26.907291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.716 
[2024-07-25 01:11:26.907334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.716 [2024-07-25 01:11:26.907382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.716 [2024-07-25 01:11:26.907421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.716 [2024-07-25 01:11:26.907468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.716 [2024-07-25 01:11:26.907511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.716 [2024-07-25 01:11:26.907554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.716 [2024-07-25 01:11:26.907602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.716 [2024-07-25 01:11:26.907651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.907695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.907740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.907783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.907826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.907869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.907913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.907964] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.908011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.908059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.908104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.908147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.908195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.908235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.908277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.908320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.908363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.908408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.908452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.908496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.908544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.908588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:04.717 [2024-07-25 01:11:26.908630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.908674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.908717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.908760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.908955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.909000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.909047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.909088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.909126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.909163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.909209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.909250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.909289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.909325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.909357] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.909688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.909732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.909770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.909812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.909842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.909883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.909926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.909966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.910003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.910046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.910091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.910134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.910180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:11:04.717 [2024-07-25 
01:11:26.910220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.910258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.910288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.910329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.910367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.910404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.910444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.910484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.910524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.910561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.910600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.910637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.910675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.910716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.910762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.910802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.910848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.910893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.910937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.910985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.911038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.911087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.911129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.911174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.911213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.911253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.911302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.911346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.911388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 
[2024-07-25 01:11:26.911433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.911465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.911501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.911542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.911586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.911623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.911663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.911704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.911743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.911780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.911819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.912347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.912394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.912431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.912470] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.912509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.717 [2024-07-25 01:11:26.912545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.718 [2024-07-25 01:11:26.912583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.718 [2024-07-25 01:11:26.912633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.718 [2024-07-25 01:11:26.912677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.718 [2024-07-25 01:11:26.912719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.718 [2024-07-25 01:11:26.912761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.718 [2024-07-25 01:11:26.912804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.718 [2024-07-25 01:11:26.912848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.718 [2024-07-25 01:11:26.912893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.718 [2024-07-25 01:11:26.912935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.718 [2024-07-25 01:11:26.912977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.718 [2024-07-25 01:11:26.913022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.718 [2024-07-25 01:11:26.913078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:04.718 [2024-07-25 01:11:26.913120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.718 [2024-07-25 01:11:26.913166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.718 [2024-07-25 01:11:26.913213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.718 [2024-07-25 01:11:26.913256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.718 [2024-07-25 01:11:26.913298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.718 [2024-07-25 01:11:26.913346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.718 [2024-07-25 01:11:26.913394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.718 [2024-07-25 01:11:26.913439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.718 [2024-07-25 01:11:26.913490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.718 [2024-07-25 01:11:26.913537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.718 [2024-07-25 01:11:26.913580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.718 [2024-07-25 01:11:26.913623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.718 [2024-07-25 01:11:26.913671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.718 [2024-07-25 01:11:26.913715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.718 [2024-07-25 01:11:26.913760] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.718 [2024-07-25 01:11:26.913803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.718 [2024-07-25 01:11:26.913844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.718 [2024-07-25 01:11:26.913888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.718 [2024-07-25 01:11:26.913932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.718 [2024-07-25 01:11:26.913974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.718 [2024-07-25 01:11:26.914020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.718 [2024-07-25 01:11:26.914074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.718 [2024-07-25 01:11:26.914117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.718 [2024-07-25 01:11:26.914163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.718 [2024-07-25 01:11:26.914203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.718 [2024-07-25 01:11:26.914245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.718 [2024-07-25 01:11:26.914285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.718 [2024-07-25 01:11:26.914329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.718 [2024-07-25 01:11:26.914372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:04.718 [2024-07-25 01:11:26.914414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 [identical error repeated verbatim for each read command from 01:11:26.914454 through 01:11:26.929114] 00:11:04.722 [2024-07-25 01:11:26.929151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.929194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.929232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.929273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.929314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.929351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.929392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.929429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.929469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.929512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.929554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.929599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.929650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.929697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.929742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 
01:11:26.929786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.929829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.929870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.929916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.929966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.930010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.930056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.930107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.930149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.930194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.930237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.930282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.930326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.930374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.930429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.930467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.930508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.930546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.930587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.931086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.931128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.931165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.931201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.931239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.931281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.931323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.931363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.931403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.931445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 
[2024-07-25 01:11:26.931484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.931521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.931566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.931606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.931652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.931692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.931734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.931784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.931830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.931874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.931917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.931961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.932004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.932051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.932096] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.932137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.932180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.932225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.932271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.932315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.932364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.932407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.932449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.932489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.932532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.932575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.932613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.932651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.932690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:04.722 [2024-07-25 01:11:26.932722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.932760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.932799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.932835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.932872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.932911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.932950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.932992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.933036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.933078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.933116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.933157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.933191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.933233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.933273] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.933321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.933363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.933406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.933448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.933494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.933533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.933581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.722 [2024-07-25 01:11:26.933626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.933672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.933716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.933899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.933947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.933990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.934029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.934080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.934124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.934168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.934213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.934255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.934295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.934624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.934677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.934719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.934755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.934793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.934826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.934864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.934900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 
01:11:26.934942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.934984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.935021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.935061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.935098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.935140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.935177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.935214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.935251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.935290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.935333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.935376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.935422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.935470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.935515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.935563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.935606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.935650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.935692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.935734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.935779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.935820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.935865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.935901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.935936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.935972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.936011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.936051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.936093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 
[2024-07-25 01:11:26.936132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.936173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.936207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.936243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.936287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.936329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.936376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.936417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.936461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.936506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.936552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.936595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.936641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.936682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.936728] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.936778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.936825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.937340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.937382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.937419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.937457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.937496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.937534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.937563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.937603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.937640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.937686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.937730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.937774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:04.723 [2024-07-25 01:11:26.937811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.937847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.937883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.937920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.937964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.938002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.938040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.938084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.938125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.723 [2024-07-25 01:11:26.938168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.724 [2024-07-25 01:11:26.938207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.724 [2024-07-25 01:11:26.938254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.724 [2024-07-25 01:11:26.938292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.724 [2024-07-25 01:11:26.938334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.724 [2024-07-25 01:11:26.938373] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.724 [2024-07-25 01:11:26.938411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical *ERROR* line repeated verbatim (timestamps 01:11:26.938453 through 01:11:26.952376); duplicates omitted ...]
> SGL length 1 00:11:04.727 [2024-07-25 01:11:26.952420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.727 [2024-07-25 01:11:26.952465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.727 [2024-07-25 01:11:26.952510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.727 [2024-07-25 01:11:26.952552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.727 [2024-07-25 01:11:26.952598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.727 [2024-07-25 01:11:26.952765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.727 [2024-07-25 01:11:26.952810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.727 [2024-07-25 01:11:26.952846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.727 [2024-07-25 01:11:26.952882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.727 [2024-07-25 01:11:26.952920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.727 [2024-07-25 01:11:26.952957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.727 [2024-07-25 01:11:26.952995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.727 [2024-07-25 01:11:26.953035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.727 [2024-07-25 01:11:26.953082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.727 [2024-07-25 01:11:26.953406] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.727 [2024-07-25 01:11:26.953446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.727 [2024-07-25 01:11:26.953485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.727 [2024-07-25 01:11:26.953524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.727 [2024-07-25 01:11:26.953566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.727 [2024-07-25 01:11:26.953605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.727 [2024-07-25 01:11:26.953645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.727 [2024-07-25 01:11:26.953683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.727 [2024-07-25 01:11:26.953718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.727 [2024-07-25 01:11:26.953753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.727 [2024-07-25 01:11:26.953802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.727 [2024-07-25 01:11:26.953844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.727 [2024-07-25 01:11:26.953888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.727 [2024-07-25 01:11:26.953929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.727 [2024-07-25 01:11:26.953972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:04.727 [2024-07-25 01:11:26.954024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.727 [2024-07-25 01:11:26.954074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.727 [2024-07-25 01:11:26.954117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.727 [2024-07-25 01:11:26.954161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.727 [2024-07-25 01:11:26.954208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.727 [2024-07-25 01:11:26.954250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.727 [2024-07-25 01:11:26.954291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.727 [2024-07-25 01:11:26.954330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.727 [2024-07-25 01:11:26.954359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.727 [2024-07-25 01:11:26.954400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.727 [2024-07-25 01:11:26.954440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.727 [2024-07-25 01:11:26.954476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.727 [2024-07-25 01:11:26.954515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.727 [2024-07-25 01:11:26.954551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.727 [2024-07-25 
01:11:26.954585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.727 [2024-07-25 01:11:26.954621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.727 [2024-07-25 01:11:26.954661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.727 [2024-07-25 01:11:26.954697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.727 [2024-07-25 01:11:26.954737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.727 [2024-07-25 01:11:26.954788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.727 [2024-07-25 01:11:26.954828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.727 [2024-07-25 01:11:26.954871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.727 [2024-07-25 01:11:26.954914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.727 [2024-07-25 01:11:26.954956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.727 [2024-07-25 01:11:26.954999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.727 [2024-07-25 01:11:26.955041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.727 [2024-07-25 01:11:26.955088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.727 [2024-07-25 01:11:26.955139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.727 [2024-07-25 01:11:26.955180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.727 [2024-07-25 01:11:26.955221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.727 [2024-07-25 01:11:26.955265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.955311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.955351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.955398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.955444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.955490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.955535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.955576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.955623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.955663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.956226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.956270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.956307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 
[2024-07-25 01:11:26.956348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.956387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.956426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.956463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.956504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.956545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.956595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.956638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.956681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.956729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.956771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.956816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.956858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.956901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.956943] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.956991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.957033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.957078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.957120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.957160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.957202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.957248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.957291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.957334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.957383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.957422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.957467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.957511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.957553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:04.728 [2024-07-25 01:11:26.957599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.957643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.957687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.957739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.957782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.957828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.957877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.957922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.957964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.958016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.958066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.958113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.958156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.958196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.958238] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.958283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.958325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.958368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.958409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.958454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.958499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.958553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.958594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.958639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.958680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.958720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.958766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.958806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.958846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.958886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.958930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.958966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.959201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.959242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.959281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.959321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.959357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.959393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.959436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.959478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.959808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.959856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.959900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 
01:11:26.959941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.959982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.960021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.960061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.960098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:11:04.728 [2024-07-25 01:11:26.960137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.960175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.960212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.728 [2024-07-25 01:11:26.960248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.729 [2024-07-25 01:11:26.960288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.729 [2024-07-25 01:11:26.960325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.729 [2024-07-25 01:11:26.960365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.729 [2024-07-25 01:11:26.960404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.729 [2024-07-25 01:11:26.960441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:11:04.729 [2024-07-25 01:11:26.960477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.729 [2024-07-25 01:11:26.960514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.729 [2024-07-25 01:11:26.960551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.729 [2024-07-25 01:11:26.960594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.729 [2024-07-25 01:11:26.960642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.729 [2024-07-25 01:11:26.960687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.729 [2024-07-25 01:11:26.960729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.729 [2024-07-25 01:11:26.960784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.729 [2024-07-25 01:11:26.960825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.729 [2024-07-25 01:11:26.960867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.729 [2024-07-25 01:11:26.960911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.729 [2024-07-25 01:11:26.960956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.729 [2024-07-25 01:11:26.960999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.729 [2024-07-25 01:11:26.961046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.729 [2024-07-25 01:11:26.961091] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.729 [2024-07-25 01:11:26.961138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.729 [2024-07-25 01:11:26.961180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.729 [2024-07-25 01:11:26.961226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.729 [2024-07-25 01:11:26.961280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.729 [2024-07-25 01:11:26.961323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.729 [2024-07-25 01:11:26.961381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.729 [2024-07-25 01:11:26.961424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.729 [2024-07-25 01:11:26.961464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.729 [2024-07-25 01:11:26.961495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.729 [2024-07-25 01:11:26.961533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.729 [2024-07-25 01:11:26.961569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.729 [2024-07-25 01:11:26.961604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.729 [2024-07-25 01:11:26.961645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.729 [2024-07-25 01:11:26.961689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:04.729 [2024-07-25 01:11:26.961726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.729 [2024-07-25 01:11:26.961764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.729 [2024-07-25 01:11:26.961802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.729 [2024-07-25 01:11:26.961839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.729 [2024-07-25 01:11:26.961879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.729 [2024-07-25 01:11:26.961915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.729 [2024-07-25 01:11:26.961957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.729 [2024-07-25 01:11:26.961987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.729 [2024-07-25 01:11:26.962023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.729 [2024-07-25 01:11:26.962637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.729 [2024-07-25 01:11:26.962695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.729 [2024-07-25 01:11:26.962740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.729 [2024-07-25 01:11:26.962783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.729 [2024-07-25 01:11:26.962830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.729 [2024-07-25 01:11:26.962876] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
> SGL length 1 00:11:04.732 [2024-07-25 01:11:26.976928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.732 [2024-07-25 01:11:26.976969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.732 [2024-07-25 01:11:26.977007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.732 [2024-07-25 01:11:26.977056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.732 [2024-07-25 01:11:26.977095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.732 [2024-07-25 01:11:26.977133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.732 [2024-07-25 01:11:26.977171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.732 [2024-07-25 01:11:26.977209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.732 [2024-07-25 01:11:26.977251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.732 [2024-07-25 01:11:26.977294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.732 [2024-07-25 01:11:26.977329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.732 [2024-07-25 01:11:26.977366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.732 [2024-07-25 01:11:26.977407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.732 [2024-07-25 01:11:26.977444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.732 [2024-07-25 01:11:26.977478] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.732 [2024-07-25 01:11:26.977518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.732 [2024-07-25 01:11:26.977555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.732 [2024-07-25 01:11:26.977595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.732 [2024-07-25 01:11:26.977636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.732 [2024-07-25 01:11:26.977676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.732 [2024-07-25 01:11:26.977717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.732 [2024-07-25 01:11:26.977755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.732 [2024-07-25 01:11:26.977790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.732 [2024-07-25 01:11:26.977832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.732 [2024-07-25 01:11:26.977878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.732 [2024-07-25 01:11:26.978063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.978110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.978155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.978200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.978239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.978285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.978333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.978378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.978724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.978771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.978814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.978857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.978899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.978938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.978969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.979007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.979047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.979085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 
01:11:26.979122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.979161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.979201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.979239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.979278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.979317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.979362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.979401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.979441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.979477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.979514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.979548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.979586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.979622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.979657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.979694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.979730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.979768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.979813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.979853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.979890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.979926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.979965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.980011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.980051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.980095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.980143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.980192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.980236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 
[2024-07-25 01:11:26.980277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.980320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.980363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.980412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.980454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.980498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.980539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.980585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.980631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.980676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.980727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.980770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.980812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.980863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.980903] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.980947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.980996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.981508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.981559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.981603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.981644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.981683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.981724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.981766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.981804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.981844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.981884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.981920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.981957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:04.733 [2024-07-25 01:11:26.981993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.982031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.982074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.982117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.982169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.982210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.982247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.982285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.982325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.982366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.982396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.982436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.982471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.982511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.982549] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.733 [2024-07-25 01:11:26.982582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.982618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.982663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.982702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.982743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.982785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.982826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.982867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.982907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.982945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.982984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.983028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.983080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.983127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.983170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.983213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.983260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.983301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.983343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.983387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.983434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.983481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.983523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.983565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.983603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.983649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.983690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.983727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 
01:11:26.983761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.983801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.983837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.983875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.983920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.983964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.984003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.984049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.984090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.984273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.984312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.984352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.984391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.984432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.984465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.984499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.984541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.984949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.984988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.985038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.985085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.985130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.985173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.985218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.985260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.985308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.985345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.985388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.985441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 
[2024-07-25 01:11:26.985487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.985530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.985580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.985622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.985666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.985712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.985756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.985802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.985853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.985894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.985946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.985990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.986034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.986080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.986123] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.986164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.986208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.986251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.986294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.986340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.986383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.986436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.734 [2024-07-25 01:11:26.986480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.735 [2024-07-25 01:11:26.986521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.735 [2024-07-25 01:11:26.986576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.735 [2024-07-25 01:11:26.986618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.735 [2024-07-25 01:11:26.986655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.735 [2024-07-25 01:11:26.986696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.735 [2024-07-25 01:11:26.986731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:04.735 [2024-07-25 01:11:26.986771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.735 [identical *ERROR* line repeated; timestamps 2024-07-25 01:11:26.986817 through 01:11:27.001526, log clock 00:11:04.735-00:11:04.738]
> SGL length 1 00:11:04.738 [2024-07-25 01:11:27.001579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.738 [2024-07-25 01:11:27.001619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.738 [2024-07-25 01:11:27.001662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.738 [2024-07-25 01:11:27.001707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.738 [2024-07-25 01:11:27.001756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.738 [2024-07-25 01:11:27.001797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.738 [2024-07-25 01:11:27.001838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.738 [2024-07-25 01:11:27.001878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.738 [2024-07-25 01:11:27.001915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.738 [2024-07-25 01:11:27.001953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.738 [2024-07-25 01:11:27.002000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.738 [2024-07-25 01:11:27.002046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.738 [2024-07-25 01:11:27.002085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.738 [2024-07-25 01:11:27.002123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.738 [2024-07-25 01:11:27.002159] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.738 [2024-07-25 01:11:27.002191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.738 [2024-07-25 01:11:27.002228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.738 [2024-07-25 01:11:27.002266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.738 [2024-07-25 01:11:27.002304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.738 [2024-07-25 01:11:27.002343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.738 [2024-07-25 01:11:27.002383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.738 [2024-07-25 01:11:27.002420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.738 [2024-07-25 01:11:27.002460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.738 [2024-07-25 01:11:27.002497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.738 [2024-07-25 01:11:27.002535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.738 [2024-07-25 01:11:27.002568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.738 [2024-07-25 01:11:27.002605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.738 [2024-07-25 01:11:27.002642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.738 [2024-07-25 01:11:27.002687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:04.738 [2024-07-25 01:11:27.002728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.738 [2024-07-25 01:11:27.002768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.738 [2024-07-25 01:11:27.002809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.738 [2024-07-25 01:11:27.002987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.738 [2024-07-25 01:11:27.003025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.738 [2024-07-25 01:11:27.003071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.738 [2024-07-25 01:11:27.003116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.738 [2024-07-25 01:11:27.003163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.738 [2024-07-25 01:11:27.003204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.738 [2024-07-25 01:11:27.003561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.738 [2024-07-25 01:11:27.003606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.738 [2024-07-25 01:11:27.003655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.738 [2024-07-25 01:11:27.003696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.738 [2024-07-25 01:11:27.003741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.738 [2024-07-25 
01:11:27.003798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.738 [2024-07-25 01:11:27.003842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.738 [2024-07-25 01:11:27.003883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.738 [2024-07-25 01:11:27.003928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.738 [2024-07-25 01:11:27.003969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.738 [2024-07-25 01:11:27.004012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.738 [2024-07-25 01:11:27.004058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.738 [2024-07-25 01:11:27.004103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.738 [2024-07-25 01:11:27.004148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.738 [2024-07-25 01:11:27.004196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.738 [2024-07-25 01:11:27.004249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.738 [2024-07-25 01:11:27.004294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.738 [2024-07-25 01:11:27.004339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.738 [2024-07-25 01:11:27.004387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.738 [2024-07-25 01:11:27.004431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.738 [2024-07-25 01:11:27.004479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.738 [2024-07-25 01:11:27.004529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.738 [2024-07-25 01:11:27.004574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.738 [2024-07-25 01:11:27.004612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.738 [2024-07-25 01:11:27.004649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.004679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.004720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.004759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.004795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.004832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.004868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.004906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.004946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.004991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 
[2024-07-25 01:11:27.005035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.005089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.005127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.005165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.005195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.005236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.005274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.005313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.005350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.005391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.005429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.005464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.005502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.005540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.005580] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.005619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.005658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.005696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.005734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.005769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.005807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.005853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.005893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.005941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.006440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.006485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.006532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.006578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.006623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:04.739 [2024-07-25 01:11:27.006664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.006708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.006749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.006794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.006841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.006888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.006932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.006974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.007017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.007065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.007107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.007153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.007199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.007236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.007272] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.007302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.007344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.007382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.007420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.007459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.007497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.007535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.007577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.007623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.007669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.007707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.007744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.007781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.007811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.007846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.007883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.007923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.007960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.008001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.008040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.008085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.008124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.008166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.008203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.008242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.739 [2024-07-25 01:11:27.008282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.740 [2024-07-25 01:11:27.008320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.740 [2024-07-25 01:11:27.008367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.740 [2024-07-25 
01:11:27.008412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.740 [2024-07-25 01:11:27.008454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.740 [2024-07-25 01:11:27.008497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.740 [2024-07-25 01:11:27.008541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.740 [2024-07-25 01:11:27.008582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.740 [2024-07-25 01:11:27.008624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.740 [2024-07-25 01:11:27.008666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.740 [2024-07-25 01:11:27.008709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.740 [2024-07-25 01:11:27.008756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.740 [2024-07-25 01:11:27.008805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.740 [2024-07-25 01:11:27.008845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.740 [2024-07-25 01:11:27.008897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.740 [2024-07-25 01:11:27.008941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.740 [2024-07-25 01:11:27.008983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.740 [2024-07-25 01:11:27.009027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.740 [2024-07-25 01:11:27.009077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.740 [2024-07-25 01:11:27.009275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.740 [2024-07-25 01:11:27.009321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.740 [2024-07-25 01:11:27.009359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.740 [2024-07-25 01:11:27.009396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.740 [2024-07-25 01:11:27.009437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.740 [2024-07-25 01:11:27.009476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.740 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:11:04.740 [2024-07-25 01:11:27.009804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.740 [2024-07-25 01:11:27.009844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.740 [2024-07-25 01:11:27.009883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.740 [2024-07-25 01:11:27.009930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.740 [2024-07-25 01:11:27.009968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.740 [2024-07-25 01:11:27.010005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.740 [2024-07-25 01:11:27.010049] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.740 [2024-07-25 01:11:27.010088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.740 [2024-07-25 01:11:27.010129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.740 [2024-07-25 01:11:27.010167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.740 [2024-07-25 01:11:27.010204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.740 [2024-07-25 01:11:27.010245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.740 [2024-07-25 01:11:27.010286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.740 [2024-07-25 01:11:27.010332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.740 [2024-07-25 01:11:27.010379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.740 [2024-07-25 01:11:27.010427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.740 [2024-07-25 01:11:27.010471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.740 [2024-07-25 01:11:27.010517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.740 [2024-07-25 01:11:27.010561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.740 [2024-07-25 01:11:27.010618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.740 [2024-07-25 01:11:27.010658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:04.740 [2024-07-25 01:11:27.010698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 [... last message repeated through 00:11:04.743 [2024-07-25 01:11:27.025744]; identical entries collapsed ...]
> SGL length 1 00:11:04.743 [2024-07-25 01:11:27.025782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.743 [2024-07-25 01:11:27.025823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.743 [2024-07-25 01:11:27.025863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.743 [2024-07-25 01:11:27.025902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.743 [2024-07-25 01:11:27.025937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.743 [2024-07-25 01:11:27.025980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.743 [2024-07-25 01:11:27.026022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.743 [2024-07-25 01:11:27.026074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.743 [2024-07-25 01:11:27.026115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.743 [2024-07-25 01:11:27.026170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.743 [2024-07-25 01:11:27.026215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.743 [2024-07-25 01:11:27.026259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.743 [2024-07-25 01:11:27.026307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.743 [2024-07-25 01:11:27.026353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.743 [2024-07-25 01:11:27.026396] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.026438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.026481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.026529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.026575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.026621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.026667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.026715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.026764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.026808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.026852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.026907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.026955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.027000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.027050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.027093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.027134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.027171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.027211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.027249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.027289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.027324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.027369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.027405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.027443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.027484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.027523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.027563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.027602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 
01:11:27.027640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.027677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.027716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.027754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.027794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.027833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.027873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.027911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.027952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.027992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.028166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.028217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.028267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.028310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.028355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.028722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.028768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.028823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.028864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.028910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.028954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.028994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.029031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.029073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.029124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.029154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.029197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.029235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.029274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 
[2024-07-25 01:11:27.029311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.029351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.029389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.029435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.029477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.029515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.029552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.029592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.029629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.029668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.029707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.029750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.029787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.029837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.029876] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.029924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.029970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.030013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.030062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.030106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.030149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.030191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.030236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.030285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.744 [2024-07-25 01:11:27.030324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.030363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.030401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.030432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.030471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:04.745 [2024-07-25 01:11:27.030506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.030545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.030584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.030620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.030661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.030699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.030739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.030777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.030818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.030859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.030897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.030939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.030996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.031039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.031090] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.031134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.031630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.031679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.031725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.031773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.031816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.031856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.031890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.031925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.031964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.032000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.032040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.032084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.032123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.032162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.032204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.032243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.032298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.032330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.032368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.032407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.032446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.032490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.032527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.032568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.032606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.032648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.032686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 
01:11:27.032725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.032762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.032806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.032848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.032899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.032945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.032989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.033036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.033093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.033137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.033182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.033234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.033279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.033323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.033370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.033415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.033455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.033506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.033554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.033598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.033641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.033688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.033732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.033775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.033829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.033873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.033918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.033962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.034005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 
[2024-07-25 01:11:27.034050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.034100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.034141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.034186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.034231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.034272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.034320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.034365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.034546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.034590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.034635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.034679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.034723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.035080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.035122] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.745 [2024-07-25 01:11:27.035159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 [... identical *ERROR* entries from 01:11:27.035195 through 01:11:27.049594 elided ...] 00:11:04.749 [2024-07-25 01:11:27.049636] ctrlr_bdev.c:
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.749 [2024-07-25 01:11:27.049676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.749 [2024-07-25 01:11:27.049713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.749 [2024-07-25 01:11:27.049749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.749 [2024-07-25 01:11:27.049785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.749 [2024-07-25 01:11:27.049825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.749 [2024-07-25 01:11:27.049865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.749 [2024-07-25 01:11:27.049903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.749 [2024-07-25 01:11:27.049940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.749 [2024-07-25 01:11:27.049980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.749 [2024-07-25 01:11:27.050036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.749 [2024-07-25 01:11:27.050543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.749 [2024-07-25 01:11:27.050591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.749 [2024-07-25 01:11:27.050633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.749 [2024-07-25 01:11:27.050680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:04.749 [2024-07-25 01:11:27.050721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.749 [2024-07-25 01:11:27.050766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.749 [2024-07-25 01:11:27.050808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.749 [2024-07-25 01:11:27.050848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.749 [2024-07-25 01:11:27.050890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.749 [2024-07-25 01:11:27.050930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.749 [2024-07-25 01:11:27.050970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.749 [2024-07-25 01:11:27.051009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.749 [2024-07-25 01:11:27.051054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.749 [2024-07-25 01:11:27.051096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.749 [2024-07-25 01:11:27.051134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.749 [2024-07-25 01:11:27.051170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.749 [2024-07-25 01:11:27.051206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.749 [2024-07-25 01:11:27.051248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.749 [2024-07-25 01:11:27.051294] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.749 [2024-07-25 01:11:27.051332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.749 [2024-07-25 01:11:27.051368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.749 [2024-07-25 01:11:27.051406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.749 [2024-07-25 01:11:27.051447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.749 [2024-07-25 01:11:27.051489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.749 [2024-07-25 01:11:27.051527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.749 [2024-07-25 01:11:27.051566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.749 [2024-07-25 01:11:27.051602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.749 [2024-07-25 01:11:27.051644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.749 [2024-07-25 01:11:27.051682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.749 [2024-07-25 01:11:27.051723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.749 [2024-07-25 01:11:27.051764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.749 [2024-07-25 01:11:27.051810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.749 [2024-07-25 01:11:27.051853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:04.749 [2024-07-25 01:11:27.051908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.749 [2024-07-25 01:11:27.051953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.749 [2024-07-25 01:11:27.051996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.749 [2024-07-25 01:11:27.052046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.749 [2024-07-25 01:11:27.052091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.749 [2024-07-25 01:11:27.052134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.749 [2024-07-25 01:11:27.052176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.749 [2024-07-25 01:11:27.052220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.749 [2024-07-25 01:11:27.052261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.749 [2024-07-25 01:11:27.052305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.052354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.052402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.052444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.052486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 
01:11:27.052530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.052570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.052616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.052654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.052691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.052728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.052773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.052809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.052846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.052885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.052922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.052965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.053007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.053048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.053088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.053128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.053168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.053362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.053399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.053441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.053813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.053862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.053908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.053954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.053999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.054053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.054101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.054145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.054186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 
[2024-07-25 01:11:27.054230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.054281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.054324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.054370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.054413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.054456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.054500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.054547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.054588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.054630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.054673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.054716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.054761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.054807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.054852] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.054897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.054941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.054984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.055029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.055077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.055125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.055174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.055219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.055260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.055306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.055345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.055376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.055412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.055452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:04.750 [2024-07-25 01:11:27.055487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.055523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.055560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.055600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.055635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.055678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.055719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.055758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.055796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.055842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.055877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.055913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.055949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.055988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.056025] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.056072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.056118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.056158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.056199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.056230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.056268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.056307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.056838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.056886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.056936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.056978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.057024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.057084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.057129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.057178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.057226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.057267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.057310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.750 [2024-07-25 01:11:27.057356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.751 [2024-07-25 01:11:27.057400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.751 [2024-07-25 01:11:27.057445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.751 [2024-07-25 01:11:27.057486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.751 [2024-07-25 01:11:27.057536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.751 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:11:04.751 [2024-07-25 01:11:27.057585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.751 [2024-07-25 01:11:27.057630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.751 [2024-07-25 01:11:27.057678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.751 [2024-07-25 01:11:27.057729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.751 [2024-07-25 01:11:27.057776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.751 [2024-07-25 01:11:27.057818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.751 [2024-07-25 01:11:27.057861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.751 [2024-07-25 01:11:27.057910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.751 [2024-07-25 01:11:27.057951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.751 [2024-07-25 01:11:27.057990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.751 [2024-07-25 01:11:27.058030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.751 [2024-07-25 01:11:27.058073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.751 [2024-07-25 01:11:27.058116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.751 [2024-07-25 01:11:27.058155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.751 [2024-07-25 01:11:27.058190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.751 [2024-07-25 01:11:27.058220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.751 [2024-07-25 01:11:27.058257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.751 [2024-07-25 01:11:27.058297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.751 [2024-07-25 01:11:27.058334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.751 
[2024-07-25 01:11:27.058379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.751 [2024-07-25 01:11:27.058425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.751 [2024-07-25 01:11:27.058466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.751 [2024-07-25 01:11:27.058505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.751 [2024-07-25 01:11:27.058543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.751 [2024-07-25 01:11:27.058583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.751 [2024-07-25 01:11:27.058619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.751 [2024-07-25 01:11:27.058664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.751 [2024-07-25 01:11:27.058695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.751 [2024-07-25 01:11:27.058732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.751 [2024-07-25 01:11:27.058766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.751 [2024-07-25 01:11:27.058803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.751 [2024-07-25 01:11:27.058842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.751 [2024-07-25 01:11:27.058881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.751 [2024-07-25 01:11:27.058925] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.754 [2024-07-25 01:11:27.073958] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.754 [2024-07-25 01:11:27.073996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.754 [2024-07-25 01:11:27.074035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.754 [2024-07-25 01:11:27.074075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.754 [2024-07-25 01:11:27.074111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.754 [2024-07-25 01:11:27.074157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.754 [2024-07-25 01:11:27.074201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.754 [2024-07-25 01:11:27.074245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.754 [2024-07-25 01:11:27.074292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.754 [2024-07-25 01:11:27.074335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.754 [2024-07-25 01:11:27.074377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.754 [2024-07-25 01:11:27.074423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.754 [2024-07-25 01:11:27.074470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.754 [2024-07-25 01:11:27.074510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.754 [2024-07-25 01:11:27.074553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:04.754 [2024-07-25 01:11:27.074595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.754 [2024-07-25 01:11:27.074636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.754 [2024-07-25 01:11:27.074682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.074727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.074771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.074822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.074865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.074910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.074967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.075009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.075056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.075096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.075136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.075179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.075674] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.075707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.075745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.075783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.075823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.075864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.075902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.075945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.075982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.076021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.076065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.076102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.076145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.076180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.076229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.076272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.076316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.076362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.076410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.076458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.076501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.076548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.076590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.076632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.076679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.076728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.076757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.076802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.076845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 
01:11:27.076883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.076922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.076961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.076995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.077034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.077080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.077119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.077153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.077198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.077238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.077287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.077330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.077372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.077416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.077456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.077507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.077548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.077590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.077637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.077677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.077722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.077770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.077813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.077856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.077901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.077946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.077992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.078034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.078076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 
[2024-07-25 01:11:27.078112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.078147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.078192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.078238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.078276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.078309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.078480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.078528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.078564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.079011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.079068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.079112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.079156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.079201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.079245] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.079286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.079333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.079376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.079421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.079465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.079506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.079553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.079593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.079637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.079683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.079726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.079769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.079814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.755 [2024-07-25 01:11:27.079854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:04.755 [2024-07-25 01:11:27.079898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.079943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.079990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.080031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.080081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.080124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.080174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.080217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.080261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.080303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.080343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.080381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.080416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.080453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.080491] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.080536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.080580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.080615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.080650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.080686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.080725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.080762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.080800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.080837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.080877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.080915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.080952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.080990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.081028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.081062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.081102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.081141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.081178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.081217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.081255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.081290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.081327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.081362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.081403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.081440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.081932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.081980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.082024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 
01:11:27.082078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.082122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.082164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.082208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.082249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.082294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.082337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.082387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.082428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.082470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.082517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.082559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.082601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.082644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.082686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.082726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.082757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.082795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.082836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.082873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.082907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.082947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.082988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.083025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.083067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.083110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.083149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.083185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.083235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 
[2024-07-25 01:11:27.083265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.083306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.083343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.083383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.083420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.083454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.083490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.083529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.083565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.083607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.083646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.083684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.083721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.083758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.756 [2024-07-25 01:11:27.083796] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.757 true
> SGL length 1 00:11:04.759 [2024-07-25 01:11:27.097949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.759 [2024-07-25 01:11:27.097991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.759 [2024-07-25 01:11:27.098035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.759 [2024-07-25 01:11:27.098079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.759 [2024-07-25 01:11:27.098110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.759 [2024-07-25 01:11:27.098143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.759 [2024-07-25 01:11:27.098184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.759 [2024-07-25 01:11:27.098216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.759 [2024-07-25 01:11:27.098253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.759 [2024-07-25 01:11:27.098290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.759 [2024-07-25 01:11:27.098326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.759 [2024-07-25 01:11:27.098365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.759 [2024-07-25 01:11:27.098402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.759 [2024-07-25 01:11:27.098442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.759 [2024-07-25 01:11:27.098481] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.098519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.098549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.098587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.098619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.098656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.098696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.098732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.098770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.098808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.098848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.098887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.098925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.098962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.098996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.099039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.099086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.099138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.099181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.099224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.099266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.099321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.099364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.099420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.099464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.099507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.099549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.099591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.099634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 
01:11:27.099684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.099727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.099770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.099814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.099857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.100353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.100396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.100435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.100473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.100513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.100551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.100591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.100632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.100673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.100714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.100752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.100784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.100827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.100863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.100898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.100939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.100980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.101018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.101062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.101106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.101148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.101185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 01:11:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 789518 00:11:04.760 [2024-07-25 01:11:27.101226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.101272] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.101313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.101356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.101398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.101440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.101484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.101528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.101572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 01:11:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:04.760 [2024-07-25 01:11:27.101614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.101654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.101696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.101738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.101779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 
01:11:27.101811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.101846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.101884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.101925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.101966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.102010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.102054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.102093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.102135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.102177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.102219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.102263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.102311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.102351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.102399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.102443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.102488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.102530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.102582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.102623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.102669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.102713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.102757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.102800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.102846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.102889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.102932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.102978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.103162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 
[2024-07-25 01:11:27.103209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.103583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.103628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.760 [2024-07-25 01:11:27.103671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.761 [2024-07-25 01:11:27.103723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.761 [2024-07-25 01:11:27.103766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.761 [2024-07-25 01:11:27.103812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.761 [2024-07-25 01:11:27.103859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.761 [2024-07-25 01:11:27.103912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.761 [2024-07-25 01:11:27.103955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.761 [2024-07-25 01:11:27.103998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.761 [2024-07-25 01:11:27.104052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.761 [2024-07-25 01:11:27.104096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.761 [2024-07-25 01:11:27.104142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.761 [2024-07-25 01:11:27.104180] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.761 [2024-07-25 01:11:27.104218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.761 [2024-07-25 01:11:27.104252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.761 [2024-07-25 01:11:27.104286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.761 [2024-07-25 01:11:27.104327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.761 [2024-07-25 01:11:27.104367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.761 [2024-07-25 01:11:27.104406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.761 [2024-07-25 01:11:27.104442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.761 [2024-07-25 01:11:27.104480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.761 [2024-07-25 01:11:27.104520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.761 [2024-07-25 01:11:27.104555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.761 [2024-07-25 01:11:27.104590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.761 [2024-07-25 01:11:27.104627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.761 [2024-07-25 01:11:27.104671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.761 [2024-07-25 01:11:27.104713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:04.761 [2024-07-25 01:11:27.104750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.761 [2024-07-25 01:11:27.104786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.761 [2024-07-25 01:11:27.104825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.761 [2024-07-25 01:11:27.104861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.761 [2024-07-25 01:11:27.104902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.761 [2024-07-25 01:11:27.104941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.761 [2024-07-25 01:11:27.104977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.761 [2024-07-25 01:11:27.105024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.761 [2024-07-25 01:11:27.105071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.761 [2024-07-25 01:11:27.105110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.761 [2024-07-25 01:11:27.105148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.761 [2024-07-25 01:11:27.105194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.761 [2024-07-25 01:11:27.105224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.761 [2024-07-25 01:11:27.105260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.761 [2024-07-25 01:11:27.105295] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.761 [2024-07-25 01:11:27.105331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.761 [2024-07-25 01:11:27.105371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.761 [2024-07-25 01:11:27.105411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.761 [2024-07-25 01:11:27.105450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.761 [2024-07-25 01:11:27.105488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.761 [2024-07-25 01:11:27.105526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.761 [2024-07-25 01:11:27.105571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.761 [2024-07-25 01:11:27.105613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.761 [2024-07-25 01:11:27.105660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.761 [2024-07-25 01:11:27.105707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.761 [2024-07-25 01:11:27.105750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.761 [2024-07-25 01:11:27.105795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.761 [2024-07-25 01:11:27.105840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.761 [2024-07-25 01:11:27.105881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:04.761 [2024-07-25 01:11:27.105924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.761 [2024-07-25 01:11:27.105972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.761 [2024-07-25 01:11:27.106015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.761 [2024-07-25 01:11:27.106064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.761 [2024-07-25 01:11:27.106567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.761 [2024-07-25 01:11:27.106611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.761 [2024-07-25 01:11:27.106658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.761 [2024-07-25 01:11:27.106696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.761 [2024-07-25 01:11:27.106738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.761 [2024-07-25 01:11:27.106771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.761 [2024-07-25 01:11:27.106809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.761 [2024-07-25 01:11:27.106846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.761 [2024-07-25 01:11:27.106886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.761 [2024-07-25 01:11:27.106926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.761 [2024-07-25 
01:11:27.106965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.761 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:11:04.761 [identical *ERROR* entries from 01:11:27.107006 through 01:11:27.121276 elided] 00:11:04.765 [2024-07-25 01:11:27.121324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd:
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.765 [2024-07-25 01:11:27.121362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.765 [2024-07-25 01:11:27.121398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.765 [2024-07-25 01:11:27.121435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.765 [2024-07-25 01:11:27.121470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.765 [2024-07-25 01:11:27.121502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.765 [2024-07-25 01:11:27.121692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.765 [2024-07-25 01:11:27.121729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.765 [2024-07-25 01:11:27.122103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.765 [2024-07-25 01:11:27.122153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.765 [2024-07-25 01:11:27.122198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.765 [2024-07-25 01:11:27.122243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.765 [2024-07-25 01:11:27.122283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.765 [2024-07-25 01:11:27.122326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.765 [2024-07-25 01:11:27.122367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.765 
[2024-07-25 01:11:27.122404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.765 [2024-07-25 01:11:27.122446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.765 [2024-07-25 01:11:27.122498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.765 [2024-07-25 01:11:27.122543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.765 [2024-07-25 01:11:27.122585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.765 [2024-07-25 01:11:27.122634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.765 [2024-07-25 01:11:27.122675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.765 [2024-07-25 01:11:27.122719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.765 [2024-07-25 01:11:27.122765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.765 [2024-07-25 01:11:27.122808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.765 [2024-07-25 01:11:27.122851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.765 [2024-07-25 01:11:27.122903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.765 [2024-07-25 01:11:27.122945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.765 [2024-07-25 01:11:27.122988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.765 [2024-07-25 01:11:27.123033] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.765 [2024-07-25 01:11:27.123077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.765 [2024-07-25 01:11:27.123119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.765 [2024-07-25 01:11:27.123156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.765 [2024-07-25 01:11:27.123200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.765 [2024-07-25 01:11:27.123246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.765 [2024-07-25 01:11:27.123277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.765 [2024-07-25 01:11:27.123315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.765 [2024-07-25 01:11:27.123356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.765 [2024-07-25 01:11:27.123390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.765 [2024-07-25 01:11:27.123429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.765 [2024-07-25 01:11:27.123465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.765 [2024-07-25 01:11:27.123504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.765 [2024-07-25 01:11:27.123542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.765 [2024-07-25 01:11:27.123583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:04.765 [2024-07-25 01:11:27.123629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.765 [2024-07-25 01:11:27.123671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.765 [2024-07-25 01:11:27.123706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.765 [2024-07-25 01:11:27.123745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.765 [2024-07-25 01:11:27.123783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.123820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.123859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.123901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.123938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.123973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.124014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.124059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.124099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.124140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.124182] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.124225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.124276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.124321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.124363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.124407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.124456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.124502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.124549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.124591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.124634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.125146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.125200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.125245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.125286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.125327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.125368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.125411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.125456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.125497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.125539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.125579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.125625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.125663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.125708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.125759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.125799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.125845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.125890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 
01:11:27.125933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.125973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.126017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.126061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.126111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.126150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.126192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.126234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.126276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.126313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.126351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.126395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.126434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.126476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.126505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.126545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.126585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.126629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.126676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.126718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.126756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.126796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.126831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.126870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.126909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.126944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.126981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.127015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.127062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 
[2024-07-25 01:11:27.127103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.127139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.127176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.127215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.127253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.127292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.127328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.127365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.127397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.127432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.127471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.127508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.127550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.127589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.127629] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.127667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.127705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.127890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.127943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.128309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.128354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.128397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.128438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.128482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.128523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.128553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.128593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.128632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.766 [2024-07-25 01:11:27.128668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:04.767 [2024-07-25 01:11:27.128704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.767 [2024-07-25 01:11:27.128742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.767 [2024-07-25 01:11:27.128781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.767 [2024-07-25 01:11:27.128822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.767 [2024-07-25 01:11:27.128855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.767 [2024-07-25 01:11:27.128890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.767 [2024-07-25 01:11:27.128927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.767 [2024-07-25 01:11:27.128962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.767 [2024-07-25 01:11:27.129002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.767 [2024-07-25 01:11:27.129046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.767 [2024-07-25 01:11:27.129087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.767 [2024-07-25 01:11:27.129119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.767 [2024-07-25 01:11:27.129151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.767 [2024-07-25 01:11:27.129195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.767 [2024-07-25 01:11:27.129233] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.767 [2024-07-25 01:11:27.129273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.767 [2024-07-25 01:11:27.129312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.767 [2024-07-25 01:11:27.129349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.767 [2024-07-25 01:11:27.129387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.767 [2024-07-25 01:11:27.129432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.767 [2024-07-25 01:11:27.129470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.767 [2024-07-25 01:11:27.129508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.767 [2024-07-25 01:11:27.129545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.767 [2024-07-25 01:11:27.129585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.767 [2024-07-25 01:11:27.129634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.767 [2024-07-25 01:11:27.129675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.767 [2024-07-25 01:11:27.129720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.767 [2024-07-25 01:11:27.129767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.767 [2024-07-25 01:11:27.129804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:04.767 [2024-07-25 01:11:27.129850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.767 [2024-07-25 01:11:27.129898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.767 [2024-07-25 01:11:27.129943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.767 [2024-07-25 01:11:27.129985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.767 [2024-07-25 01:11:27.130038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.767 [2024-07-25 01:11:27.130088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.767 [2024-07-25 01:11:27.130134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.767 [2024-07-25 01:11:27.130177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.767 [2024-07-25 01:11:27.130224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.767 [2024-07-25 01:11:27.130274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.767 [2024-07-25 01:11:27.130316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.767 [2024-07-25 01:11:27.130358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.767 [2024-07-25 01:11:27.130404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.767 [2024-07-25 01:11:27.130444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.767 [2024-07-25 
01:11:27.130487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.767 [... same *ERROR* message repeated for each subsequent request, timestamps 01:11:27.130534 through 01:11:27.145396 ...] 00:11:04.770 [2024-07-25
01:11:27.145435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.770 [2024-07-25 01:11:27.145471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.770 [2024-07-25 01:11:27.145518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.770 [2024-07-25 01:11:27.145558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.770 [2024-07-25 01:11:27.145601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.770 [2024-07-25 01:11:27.145652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.770 [2024-07-25 01:11:27.145694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.770 [2024-07-25 01:11:27.145736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.770 [2024-07-25 01:11:27.145783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.145830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.145870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.145911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.145956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.145997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.146049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.146094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.146139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.146184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.146225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.146270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.146319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.146360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.146403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.146446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.146485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.146685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.147034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.147078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.147110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 
[2024-07-25 01:11:27.147153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.147192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.147229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.147271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.147310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.147340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.147367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.147406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.147449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.147487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.147517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.147545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.147574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.147603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.147631] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.147665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.147704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.147743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.147781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.147819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.147860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.147897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.147937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.147982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.148032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.148078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.148123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.148164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.148206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:04.771 [2024-07-25 01:11:27.148250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.148302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.148340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.148381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.148418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.148462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.148503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.148539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.148576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.148609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.148647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.148687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.148722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.148763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.148800] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.148835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.148878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.148920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.148964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.149004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.149050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.149093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.149136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.149178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.149220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.149267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.149308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.149351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.149394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.149439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.149943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.149994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.150038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.150091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.150140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.150184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.150225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.150273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.150316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.150356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.150397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.150437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.150476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 
01:11:27.150513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.150553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.150582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.150620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.771 [2024-07-25 01:11:27.150660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.150697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.150734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.150770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.150812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.150855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.150893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.150932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.150972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.151010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.151052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.151084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.151119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.151155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.151198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.151242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.151277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.151313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.151353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.151394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.151436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.151473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.151511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.151551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.151588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 
[2024-07-25 01:11:27.151629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.151668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.151710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.151749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.151791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.151834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.151876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.151920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.151961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.152007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.152049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.152095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.152134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.152176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.152227] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.152270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.152313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.152359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.152401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.152444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.152488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.152532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.152722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.153093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.153140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.153180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.153219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.153259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.153289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:04.772 [2024-07-25 01:11:27.153323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.153360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.153398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.153438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.153479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.153518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.153555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.153594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.153629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.153666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.153702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.153737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.153767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.153809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.153846] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.153881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.153916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.153955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.153993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.154034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.154073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.154112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.154149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.154190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.154227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.154265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.154299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.154344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.154388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:04.772 [2024-07-25 01:11:27.154431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.773 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:11:04.776 [2024-07-25
01:11:27.169258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.776 [2024-07-25 01:11:27.169292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.776 [2024-07-25 01:11:27.169331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.776 [2024-07-25 01:11:27.169370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.776 [2024-07-25 01:11:27.169408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.776 [2024-07-25 01:11:27.169453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.776 [2024-07-25 01:11:27.169498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.776 [2024-07-25 01:11:27.169537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.776 [2024-07-25 01:11:27.169576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.776 [2024-07-25 01:11:27.169615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.776 [2024-07-25 01:11:27.169653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.776 [2024-07-25 01:11:27.169693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.776 [2024-07-25 01:11:27.169734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.776 [2024-07-25 01:11:27.169771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.776 [2024-07-25 01:11:27.169812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.776 [2024-07-25 01:11:27.169852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.776 [2024-07-25 01:11:27.169891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.776 [2024-07-25 01:11:27.169926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.776 [2024-07-25 01:11:27.169965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.776 [2024-07-25 01:11:27.170004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.776 [2024-07-25 01:11:27.170045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.776 [2024-07-25 01:11:27.170087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.776 [2024-07-25 01:11:27.170127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.776 [2024-07-25 01:11:27.170165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.776 [2024-07-25 01:11:27.170205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.776 [2024-07-25 01:11:27.170242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.776 [2024-07-25 01:11:27.170280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.776 [2024-07-25 01:11:27.170339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.776 [2024-07-25 01:11:27.170384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.776 
[2024-07-25 01:11:27.170430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:04.776 [2024-07-25 01:11:27.170470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.057 [2024-07-25 01:11:27.170511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.057 [2024-07-25 01:11:27.170562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.057 [2024-07-25 01:11:27.170605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.057 [2024-07-25 01:11:27.170648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.057 [2024-07-25 01:11:27.170694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.057 [2024-07-25 01:11:27.170737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.057 [2024-07-25 01:11:27.170782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.057 [2024-07-25 01:11:27.170828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.057 [2024-07-25 01:11:27.170870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.057 [2024-07-25 01:11:27.170913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.057 [2024-07-25 01:11:27.170959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.057 [2024-07-25 01:11:27.171008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.057 [2024-07-25 01:11:27.171201] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.057 [2024-07-25 01:11:27.171557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.057 [2024-07-25 01:11:27.171606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.057 [2024-07-25 01:11:27.171652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.057 [2024-07-25 01:11:27.171697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.057 [2024-07-25 01:11:27.171742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.057 [2024-07-25 01:11:27.171784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.057 [2024-07-25 01:11:27.171828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.057 [2024-07-25 01:11:27.171870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.057 [2024-07-25 01:11:27.171913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.057 [2024-07-25 01:11:27.171956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.057 [2024-07-25 01:11:27.171998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.057 [2024-07-25 01:11:27.172039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.057 [2024-07-25 01:11:27.172082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.057 [2024-07-25 01:11:27.172120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:05.057 [2024-07-25 01:11:27.172154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.057 [2024-07-25 01:11:27.172193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.057 [2024-07-25 01:11:27.172241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.057 [2024-07-25 01:11:27.172271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.057 [2024-07-25 01:11:27.172307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.057 [2024-07-25 01:11:27.172345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.057 [2024-07-25 01:11:27.172385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.057 [2024-07-25 01:11:27.172426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.057 [2024-07-25 01:11:27.172466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.057 [2024-07-25 01:11:27.172505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.057 [2024-07-25 01:11:27.172540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.057 [2024-07-25 01:11:27.172577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.057 [2024-07-25 01:11:27.172621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.057 [2024-07-25 01:11:27.172671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.057 [2024-07-25 01:11:27.172714] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.057 [2024-07-25 01:11:27.172754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.057 [2024-07-25 01:11:27.172786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.057 [2024-07-25 01:11:27.172826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.057 [2024-07-25 01:11:27.172864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.057 [2024-07-25 01:11:27.172902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.057 [2024-07-25 01:11:27.172937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.057 [2024-07-25 01:11:27.172974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.057 [2024-07-25 01:11:27.173012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.057 [2024-07-25 01:11:27.173051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.057 [2024-07-25 01:11:27.173087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.057 [2024-07-25 01:11:27.173127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.057 [2024-07-25 01:11:27.173168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.057 [2024-07-25 01:11:27.173208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.057 [2024-07-25 01:11:27.173246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:05.057 [2024-07-25 01:11:27.173284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.057 [2024-07-25 01:11:27.173322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.057 [2024-07-25 01:11:27.173358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.057 [2024-07-25 01:11:27.173394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.057 [2024-07-25 01:11:27.173447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.057 [2024-07-25 01:11:27.173490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.057 [2024-07-25 01:11:27.173537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.057 [2024-07-25 01:11:27.173581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.057 [2024-07-25 01:11:27.173626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.057 [2024-07-25 01:11:27.173669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.173711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.173755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.173796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.173837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 
01:11:27.173881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.173929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.173971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.174017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.174064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.174552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.174598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.174637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.174668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.174704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.174743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.174782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.174823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.174859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.174900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.174941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.174980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.175017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.175064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.175104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.175137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.175167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.175211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.175251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.175293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.175332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.175376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.175416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.175461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 
[2024-07-25 01:11:27.175510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.175554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.175600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.175645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.175687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.175737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.175777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.175819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.175858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.175895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.175933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.175972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.176008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.176047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.176080] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.176118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.176152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.176202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.176244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.176283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.176317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.176352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.176406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.176452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.176495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.176540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.176581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.176627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.176673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:05.058 [2024-07-25 01:11:27.176719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.176765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.176805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.176851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.176893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.176946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.176985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.177027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.177072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.177115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.177161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.177348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.177707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.177754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.177798] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.177847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.177890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.177930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.177977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.178022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.178067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.178120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.178161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.178203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.178245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.178289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.178337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.178378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.178419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:05.058 [2024-07-25 01:11:27.178465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... same message repeated for every subsequent timestamp from 01:11:27.178508 through 01:11:27.193338 ...]
block size 512 > SGL length 1 00:11:05.062 [2024-07-25 01:11:27.193375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.062 [2024-07-25 01:11:27.193413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.062 [2024-07-25 01:11:27.193452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.062 [2024-07-25 01:11:27.193491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.062 [2024-07-25 01:11:27.193528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.062 [2024-07-25 01:11:27.193569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.062 [2024-07-25 01:11:27.193609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.062 [2024-07-25 01:11:27.193640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.062 [2024-07-25 01:11:27.193675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.062 [2024-07-25 01:11:27.193711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.062 [2024-07-25 01:11:27.193748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.062 [2024-07-25 01:11:27.193786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.062 [2024-07-25 01:11:27.193825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.062 [2024-07-25 01:11:27.193868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.062 [2024-07-25 
01:11:27.193909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.062 [2024-07-25 01:11:27.193947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.062 [2024-07-25 01:11:27.193983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.062 [2024-07-25 01:11:27.194016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.062 [2024-07-25 01:11:27.194058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.062 [2024-07-25 01:11:27.194104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.062 [2024-07-25 01:11:27.194150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.062 [2024-07-25 01:11:27.194193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.062 [2024-07-25 01:11:27.194240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.062 [2024-07-25 01:11:27.194288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.062 [2024-07-25 01:11:27.194332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.062 [2024-07-25 01:11:27.194376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.062 [2024-07-25 01:11:27.194422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.062 [2024-07-25 01:11:27.194463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.062 [2024-07-25 01:11:27.194507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.062 [2024-07-25 01:11:27.194554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.062 [2024-07-25 01:11:27.194593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.062 [2024-07-25 01:11:27.194635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.062 [2024-07-25 01:11:27.194675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.062 [2024-07-25 01:11:27.194722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.062 [2024-07-25 01:11:27.194764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.062 [2024-07-25 01:11:27.194809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.062 [2024-07-25 01:11:27.194850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.062 [2024-07-25 01:11:27.194895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.062 [2024-07-25 01:11:27.194937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.062 [2024-07-25 01:11:27.194984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.062 [2024-07-25 01:11:27.195036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.062 [2024-07-25 01:11:27.195083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.062 [2024-07-25 01:11:27.195124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.062 
[2024-07-25 01:11:27.195166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.062 [2024-07-25 01:11:27.195207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.062 [2024-07-25 01:11:27.195254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.062 [2024-07-25 01:11:27.195292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.062 [2024-07-25 01:11:27.195334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.195372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.195406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.195443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.195479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.195517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.195553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.195590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.195626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.195664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.195703] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.195742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.195782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.195958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.196340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.196384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.196425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.196469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.196508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.196546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.196586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.196629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.196678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.196722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.196767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:05.063 [2024-07-25 01:11:27.196811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.196853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.196898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.196943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.196981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.197022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.197082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.197127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.197174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.197225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.197269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.197311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.197352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.197395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.197440] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.197485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.197529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.197571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.197611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.197653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.197703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.197754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.197795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.197837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.197881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.197925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.197964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.198001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.198050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.198086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.198122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.198160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.198197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.198239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.198277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.198315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.198355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.198393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.198431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.198470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.198507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.198549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.198590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 
01:11:27.198619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.198657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.198694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.198733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.198771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.198807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.198844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.198882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.199392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.199441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.199486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.199534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.199577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.199619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.199666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.199708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.199748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.199796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.199834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.199877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.199924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.199968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.200009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.200055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.200100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.200141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.063 [2024-07-25 01:11:27.200179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.064 [2024-07-25 01:11:27.200209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.064 [2024-07-25 01:11:27.200248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.064 
[2024-07-25 01:11:27.200283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.064 [2024-07-25 01:11:27.200321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.064 [2024-07-25 01:11:27.200359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.064 [2024-07-25 01:11:27.200395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.064 [2024-07-25 01:11:27.200433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.064 [2024-07-25 01:11:27.200474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.064 [2024-07-25 01:11:27.200511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.064 [2024-07-25 01:11:27.200549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.064 [2024-07-25 01:11:27.200585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.064 [2024-07-25 01:11:27.200625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.064 [2024-07-25 01:11:27.200667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.064 [2024-07-25 01:11:27.200706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.064 [2024-07-25 01:11:27.200742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.064 [2024-07-25 01:11:27.200778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.064 [2024-07-25 01:11:27.200821] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.064 [2024-07-25 01:11:27.200861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.064 [2024-07-25 01:11:27.200899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.064 [2024-07-25 01:11:27.200939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.064 [2024-07-25 01:11:27.200979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.064 [2024-07-25 01:11:27.201019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.064 [2024-07-25 01:11:27.201062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.064 [2024-07-25 01:11:27.201101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.064 [2024-07-25 01:11:27.201140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.064 [2024-07-25 01:11:27.201175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.064 [2024-07-25 01:11:27.201215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.064 [2024-07-25 01:11:27.201256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.064 [2024-07-25 01:11:27.201298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.064 [2024-07-25 01:11:27.201341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.064 [2024-07-25 01:11:27.201383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:05.064 [2024-07-25 01:11:27.201425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.064 [2024-07-25 01:11:27.201477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.064 [2024-07-25 01:11:27.201519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.064 [2024-07-25 01:11:27.201562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.064 [2024-07-25 01:11:27.201608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.064 [2024-07-25 01:11:27.201651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.064 [2024-07-25 01:11:27.201696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.064 [2024-07-25 01:11:27.201742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.064 [2024-07-25 01:11:27.201787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.064 [2024-07-25 01:11:27.201831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.064 [2024-07-25 01:11:27.201873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.064 [2024-07-25 01:11:27.201916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.064 [2024-07-25 01:11:27.201963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.064 [2024-07-25 01:11:27.202005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.064 [2024-07-25 01:11:27.202194] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.064
[identical *ERROR* line repeated with timestamps 2024-07-25 01:11:27.202553 through 01:11:27.205096; repeats omitted]
00:11:05.065 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[identical *ERROR* line repeated with timestamps 2024-07-25 01:11:27.205567 through 01:11:27.216871; repeats omitted]
00:11:05.067 [2024-07-25 01:11:27.216910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:11:05.067 [2024-07-25 01:11:27.216949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.067 [2024-07-25 01:11:27.216988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.067 [2024-07-25 01:11:27.217031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.067 [2024-07-25 01:11:27.217078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.067 [2024-07-25 01:11:27.217120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.067 [2024-07-25 01:11:27.217153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.067 [2024-07-25 01:11:27.217193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.217229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.217268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.217306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.217345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.217382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.217423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.217463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.217944] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.217991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.218034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.218084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.218128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.218175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.218225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.218266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.218313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.218356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.218399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.218441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.218485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.218532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.218587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.218638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.218681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.218728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.218771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.218815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.218866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.218910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.218955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.218999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.219039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.219083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.219120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.219159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.219190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 
01:11:27.219234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.219270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.219307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.219342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.219380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.219419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.219464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.219504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.219543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.219582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.219622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.219660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.219690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.219732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.219771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.219815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.219860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.219895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.219932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.219969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.220008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.220054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.220099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.220138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.220175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.220213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.220254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.220300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.220341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 
[2024-07-25 01:11:27.220387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.220432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.220477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.220524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.220567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.220622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.220811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.221192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.221242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.221284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.221324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.221363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.221398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.221429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.221467] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.221507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.221543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.221583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.221626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.221669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.221711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.221749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.221794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.221837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.221877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.221919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.221950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.221987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.068 [2024-07-25 01:11:27.222021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:05.069 [2024-07-25 01:11:27.222066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.222103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.222138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.222174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.222212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.222253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.222294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.222332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.222368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.222401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.222434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.222473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.222508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.222551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.222597] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.222641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.222691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.222735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.222780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.222821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.222862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.222908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.222954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.223003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.223051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.223098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.223143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.223188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.223229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.223270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.223316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.223360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.223401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.223451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.223499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.223543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.223589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.223631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.223672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.223723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.223766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.224234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.224280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 
01:11:27.224319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.224361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.224411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.224452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.224488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.224529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.224569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.224607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.224648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.224687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.224727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.224769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.224808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.224848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.224890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.224928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.224970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.225012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.225058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.225105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.225147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.225189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.225235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.225275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.225319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.225362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.225404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.225448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.225494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 
[2024-07-25 01:11:27.225537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.225582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.225626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.225677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.225720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.225764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.225809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.225852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.225894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.225945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.225990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.226035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.226082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.226125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.226167] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.069 [2024-07-25 01:11:27.226211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[previous message repeated verbatim for each subsequent request; timestamps 2024-07-25 01:11:27.226255 through 01:11:27.241195, elapsed 00:11:05.069-00:11:05.073]
00:11:05.073 [2024-07-25 01:11:27.241237] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.073 [2024-07-25 01:11:27.241281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.073 [2024-07-25 01:11:27.241322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.073 [2024-07-25 01:11:27.241361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.073 [2024-07-25 01:11:27.241403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.073 [2024-07-25 01:11:27.241448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.073 [2024-07-25 01:11:27.241489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.073 [2024-07-25 01:11:27.241524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.073 [2024-07-25 01:11:27.241568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.073 [2024-07-25 01:11:27.241612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.073 [2024-07-25 01:11:27.241642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.073 [2024-07-25 01:11:27.241679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.073 [2024-07-25 01:11:27.241714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.073 [2024-07-25 01:11:27.241751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.073 [2024-07-25 01:11:27.241792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:05.073 [2024-07-25 01:11:27.241829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.073 [2024-07-25 01:11:27.241867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.073 [2024-07-25 01:11:27.241912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.073 [2024-07-25 01:11:27.241952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.073 [2024-07-25 01:11:27.241992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.073 [2024-07-25 01:11:27.242031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.073 [2024-07-25 01:11:27.242071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.073 [2024-07-25 01:11:27.242110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.073 [2024-07-25 01:11:27.242139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.073 [2024-07-25 01:11:27.242176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.073 [2024-07-25 01:11:27.242219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.073 [2024-07-25 01:11:27.242268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.073 [2024-07-25 01:11:27.242310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.073 [2024-07-25 01:11:27.242351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.073 [2024-07-25 01:11:27.242388] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.073 [2024-07-25 01:11:27.242425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.073 [2024-07-25 01:11:27.242467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.073 [2024-07-25 01:11:27.242504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.073 [2024-07-25 01:11:27.242543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.073 [2024-07-25 01:11:27.243037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.073 [2024-07-25 01:11:27.243091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.073 [2024-07-25 01:11:27.243137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.073 [2024-07-25 01:11:27.243186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.073 [2024-07-25 01:11:27.243230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.073 [2024-07-25 01:11:27.243273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.073 [2024-07-25 01:11:27.243318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.073 [2024-07-25 01:11:27.243364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.073 [2024-07-25 01:11:27.243406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.073 [2024-07-25 01:11:27.243455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:05.073 [2024-07-25 01:11:27.243497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.073 [2024-07-25 01:11:27.243539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.243583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.243625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.243670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.243712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.243753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.243795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.243837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.243882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.243931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.243972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.244015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.244067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 
01:11:27.244109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.244152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.244191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.244230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.244268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.244304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.244343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.244382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.244419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.244456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.244493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.244530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.244571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.244610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.244647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.244686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.244725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.244761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.244793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.244830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.244865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.244904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.244944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.244986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.245025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.245066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.245112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.245154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.245192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 
[2024-07-25 01:11:27.245228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.245267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.245310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.245358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.245400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.245444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.245493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.245539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.245581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.245627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.245672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.246152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.246198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.246238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.246287] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.246330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.246368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.246407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.246443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.246487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.246531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.246571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.246609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.246640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.246679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.246714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.246753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.246789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.246826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:05.074 [2024-07-25 01:11:27.246861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.246890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.246917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.246956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.246997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.247032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.247073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.247113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.247158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.247200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.247235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.247277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.247321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.247365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.247409] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.247454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.247496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.247538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.247586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.247630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.247685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.247729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.074 [2024-07-25 01:11:27.247775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.075 [2024-07-25 01:11:27.247817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.075 [2024-07-25 01:11:27.247861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.075 [2024-07-25 01:11:27.247905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.075 [2024-07-25 01:11:27.247948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.075 [2024-07-25 01:11:27.247990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.075 [2024-07-25 01:11:27.248033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:05.075 [2024-07-25 01:11:27.248080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.075 [2024-07-25 01:11:27.248125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.075 [2024-07-25 01:11:27.248170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.075 [2024-07-25 01:11:27.248217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.075 [2024-07-25 01:11:27.248262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.075 [2024-07-25 01:11:27.248301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.075 [2024-07-25 01:11:27.248343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.075 [2024-07-25 01:11:27.248385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.075 [2024-07-25 01:11:27.248429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.075 [2024-07-25 01:11:27.248471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.075 [2024-07-25 01:11:27.248516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.075 [2024-07-25 01:11:27.248558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.075 [2024-07-25 01:11:27.248589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.075 [2024-07-25 01:11:27.248627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.075 [2024-07-25 
01:11:27.248665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.075 [2024-07-25 01:11:27.248701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.075 [2024-07-25 01:11:27.249231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.075 [2024-07-25 01:11:27.249279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.075 [2024-07-25 01:11:27.249324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.075 [2024-07-25 01:11:27.249368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.075 [2024-07-25 01:11:27.249411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.075 [2024-07-25 01:11:27.249454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.075 [2024-07-25 01:11:27.249497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.075 [2024-07-25 01:11:27.249543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.075 [2024-07-25 01:11:27.249589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.075 [2024-07-25 01:11:27.249634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.075 [2024-07-25 01:11:27.249675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.075 [2024-07-25 01:11:27.249719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.075 [2024-07-25 01:11:27.249766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.075 [2024-07-25 01:11:27.249809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.075 [2024-07-25 01:11:27.249851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.075 [2024-07-25 01:11:27.249898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.075 [2024-07-25 01:11:27.249947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.075 [2024-07-25 01:11:27.249989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.075 [2024-07-25 01:11:27.250035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.075 [2024-07-25 01:11:27.250088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.075 [2024-07-25 01:11:27.250132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.075 [2024-07-25 01:11:27.250178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.075 [2024-07-25 01:11:27.250226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.075 [2024-07-25 01:11:27.250269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.075 [2024-07-25 01:11:27.250311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.075 [2024-07-25 01:11:27.250363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.075 [2024-07-25 01:11:27.250410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.075 
[2024-07-25 01:11:27.250452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.075 [2024-07-25 01:11:27.250498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.075 [2024-07-25 01:11:27.250544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.075 [2024-07-25 01:11:27.250586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.075 [2024-07-25 01:11:27.250633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.075 [2024-07-25 01:11:27.250675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.075 [2024-07-25 01:11:27.250715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.075 [2024-07-25 01:11:27.250759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.075 [2024-07-25 01:11:27.250803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.075 [2024-07-25 01:11:27.250842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.075 [2024-07-25 01:11:27.250884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.075 [2024-07-25 01:11:27.250914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.075 [2024-07-25 01:11:27.250954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.075 [2024-07-25 01:11:27.250995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.075 [2024-07-25 01:11:27.251049] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.075 Message suppressed 999 times: [2024-07-25 01:11:27.252906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.076 Read completed with error (sct=0, sc=15) 00:11:05.076 [2024-07-25 01:11:27.265512] ctrlr_bdev.c:
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.265559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.265603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.265646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.265692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.265735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.265777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.265827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.265872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.265912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.265952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.265993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.266039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.266082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.266120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:05.079 [2024-07-25 01:11:27.266158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.266189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.266229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.266268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.266308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.266347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.266385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.266421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.266461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.266499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.266538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.266576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.266613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.266650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.266684] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.266724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.266765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.266801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.266839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.266878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.266915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.266949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.266988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.267026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.267066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.267107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.267148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.267186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.267225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.267262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.267303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.267343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.267379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.267415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.267453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.267497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.267537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.268033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.268087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.268130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.268186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.268229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.268275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 
01:11:27.268322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.268353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.268389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.268436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.268473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.268512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.268557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.268597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.268634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.268671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.268714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.268759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.268801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.268832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.268867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.268905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.268945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.268983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.269022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.269066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.269107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.269145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.269184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.269225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.269266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.269303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.079 [2024-07-25 01:11:27.269342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.269382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.269424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 
[2024-07-25 01:11:27.269468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.269513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.269557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.269599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.269643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.269697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.269740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.269782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.269830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.269873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.269922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.269967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.270009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.270053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.270100] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.270143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.270184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.270232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.270269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.270317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.270359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.270395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.270442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.270487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.270527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.270569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.270599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.270636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.270672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:05.080 [2024-07-25 01:11:27.271226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.271282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.271325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.271373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.271417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.271466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.271506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.271546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.271605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.271649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.271694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.271736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.271779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.271824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.271882] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.271923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.271968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.272011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.272060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.272104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.272148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.272191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.272236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.272272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.272309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.272353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.272403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.272441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.272475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.272508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.272547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.272582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.272620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.272662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.272701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.272743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.272782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.272817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.272866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.272910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.272952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.272983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.273020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 
01:11:27.273064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.273105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.273144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.273183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.273225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.273263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.273306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.273344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.273382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.273419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.273462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.273517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.273559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.273600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.080 [2024-07-25 01:11:27.273653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.081 [2024-07-25 01:11:27.273692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.081 [2024-07-25 01:11:27.273736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.081 [2024-07-25 01:11:27.273783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.081 [2024-07-25 01:11:27.273825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.081 [2024-07-25 01:11:27.273867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.081 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:05.081 01:11:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:05.081 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:05.081 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:05.081 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:05.081 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:05.081 [2024-07-25 01:11:27.468092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.081 [2024-07-25 01:11:27.468159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.081 [2024-07-25 01:11:27.468202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.081 [2024-07-25 01:11:27.468241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.081 [2024-07-25 01:11:27.468281] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.081 [2024-07-25 01:11:27.468325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:11:05.084 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:11:05.084 [2024-07-25 01:11:27.482546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:11:05.084 [2024-07-25 01:11:27.482583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.084 [2024-07-25 01:11:27.482629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.084 [2024-07-25 01:11:27.482676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.084 [2024-07-25 01:11:27.482725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.084 [2024-07-25 01:11:27.482769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.084 [2024-07-25 01:11:27.482813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.084 [2024-07-25 01:11:27.482855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.084 [2024-07-25 01:11:27.482887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.084 [2024-07-25 01:11:27.482923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.084 [2024-07-25 01:11:27.482957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.084 [2024-07-25 01:11:27.483459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.084 [2024-07-25 01:11:27.483508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.084 [2024-07-25 01:11:27.483559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.084 [2024-07-25 01:11:27.483601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.084 [2024-07-25 01:11:27.483644] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.084 [2024-07-25 01:11:27.483689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.084 [2024-07-25 01:11:27.483736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.084 [2024-07-25 01:11:27.483784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.084 [2024-07-25 01:11:27.483825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.084 [2024-07-25 01:11:27.483870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.084 [2024-07-25 01:11:27.483914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.084 [2024-07-25 01:11:27.483952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.084 [2024-07-25 01:11:27.483992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.084 [2024-07-25 01:11:27.484032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.084 [2024-07-25 01:11:27.484066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.084 [2024-07-25 01:11:27.484103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.084 [2024-07-25 01:11:27.484143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.084 [2024-07-25 01:11:27.484180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.084 [2024-07-25 01:11:27.484219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:05.084 [2024-07-25 01:11:27.484264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.084 [2024-07-25 01:11:27.484311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.084 [2024-07-25 01:11:27.484355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.084 [2024-07-25 01:11:27.484399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.484436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.484478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.484515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.484551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.484589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.484624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.484665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.484705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.484749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.484800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 
01:11:27.484844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.484888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.484932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.484975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.485016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.485062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.485098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.485132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.485172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.485211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.485252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.485285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.485324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.485362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.485404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.485447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.485489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.485530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.485571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.485612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.485657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.485701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.485744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.485795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.485839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.485879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.485925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.485968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.486013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 
[2024-07-25 01:11:27.486062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.486109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.486595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.486639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.486682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.486726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.486756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.486791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.486827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.486869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.486907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.486944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.486985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.487028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.487079] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.487116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.487151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.487187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.487228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.487267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.487311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.487348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.487387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.487424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.487469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.487519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.487562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.487602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.487648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:05.085 [2024-07-25 01:11:27.487691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.487732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.487776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.487817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.487865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.487907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.487953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.487995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.488036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.488082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.488127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.488178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.488221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.488264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.488306] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.488349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.488394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.488440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.488483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.488526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.488570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.488610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.488655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.488703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.085 [2024-07-25 01:11:27.488745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.086 [2024-07-25 01:11:27.488786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.086 [2024-07-25 01:11:27.488826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.086 [2024-07-25 01:11:27.488862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.086 [2024-07-25 01:11:27.488908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:05.086 [2024-07-25 01:11:27.488949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.086 [2024-07-25 01:11:27.488989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.086 [2024-07-25 01:11:27.489028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.086 [2024-07-25 01:11:27.489068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.086 [2024-07-25 01:11:27.489097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.086 [2024-07-25 01:11:27.489137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.086 [2024-07-25 01:11:27.489180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.086 [2024-07-25 01:11:27.489684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.086 [2024-07-25 01:11:27.489728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.086 [2024-07-25 01:11:27.489768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.086 [2024-07-25 01:11:27.489808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.086 [2024-07-25 01:11:27.489847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.086 [2024-07-25 01:11:27.489897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.086 [2024-07-25 01:11:27.489941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.086 [2024-07-25 
01:11:27.489981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.086 [2024-07-25 01:11:27.490029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.086 [2024-07-25 01:11:27.490079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.086 [2024-07-25 01:11:27.490121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.086 [2024-07-25 01:11:27.490167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.086 [2024-07-25 01:11:27.490208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.086 [2024-07-25 01:11:27.490253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.086 [2024-07-25 01:11:27.490297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.086 [2024-07-25 01:11:27.490341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.086 [2024-07-25 01:11:27.490391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.086 [2024-07-25 01:11:27.490435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.086 [2024-07-25 01:11:27.490479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.086 [2024-07-25 01:11:27.490525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.086 [2024-07-25 01:11:27.490573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.086 [2024-07-25 01:11:27.490618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.086 [2024-07-25 01:11:27.490661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.086 [2024-07-25 01:11:27.490699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.086 [2024-07-25 01:11:27.490736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.086 [2024-07-25 01:11:27.490779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.086 [2024-07-25 01:11:27.490820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.086 [2024-07-25 01:11:27.490850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.086 [2024-07-25 01:11:27.490886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.086 [2024-07-25 01:11:27.490925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.086 [2024-07-25 01:11:27.490960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.086 [2024-07-25 01:11:27.490997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.086 [2024-07-25 01:11:27.491034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.086 [2024-07-25 01:11:27.491081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.086 [2024-07-25 01:11:27.491121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.086 [2024-07-25 01:11:27.491154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.086 
[2024-07-25 01:11:27.491189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.086 [2024-07-25 01:11:27.491228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.086 [2024-07-25 01:11:27.491267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.086 [2024-07-25 01:11:27.491305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.086 [2024-07-25 01:11:27.491343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.086 [2024-07-25 01:11:27.491383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.086 [2024-07-25 01:11:27.491421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.086 [2024-07-25 01:11:27.491465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.086 [2024-07-25 01:11:27.491511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.086 [2024-07-25 01:11:27.491554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.086 [2024-07-25 01:11:27.491601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.086 [2024-07-25 01:11:27.491643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.086 [2024-07-25 01:11:27.491687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.086 [2024-07-25 01:11:27.491734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.086 [2024-07-25 01:11:27.491778] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.087 01:11:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:11:05.087 01:11:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:11:05.089 [2024-07-25 01:11:27.505994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:11:05.089 [2024-07-25 01:11:27.506036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.089 [2024-07-25 01:11:27.506089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.089 [2024-07-25 01:11:27.506137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.089 [2024-07-25 01:11:27.506178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.089 [2024-07-25 01:11:27.506222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.089 [2024-07-25 01:11:27.506265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.089 [2024-07-25 01:11:27.506306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.089 [2024-07-25 01:11:27.506346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.089 [2024-07-25 01:11:27.506389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.506434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.506478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.506523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.506569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.506616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.506662] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.506705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.506749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.506790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.506835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.506876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.506916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.506953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.506995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.507034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.507078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.507111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.507149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.507184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.507223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.507260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.507295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.507333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.507370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.507406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.507444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.507484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.507522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.507563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.507601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.507641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.507677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.507717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.507763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 
01:11:27.507805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.508315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.508380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.508422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.508466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.508512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.508558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.508601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.508646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.508688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.508730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.508776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.508821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.508865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.508912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.508955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.508997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.509037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.509085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.509123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.509163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.509200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.509248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.509286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.509322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.509356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.509393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.509436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.509482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 
[2024-07-25 01:11:27.509522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.509563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.509601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.509644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.509688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.509730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.509766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.509800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.509839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.509879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.509918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.509959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.509998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.510037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.510079] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.510120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.510163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.510206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.510247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.510288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.510331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.510374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.510416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.510465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.510505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.510547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.510593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.510632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.090 [2024-07-25 01:11:27.510673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:05.091 [2024-07-25 01:11:27.510722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.091 [2024-07-25 01:11:27.510766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.091 [2024-07-25 01:11:27.510810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.091 [2024-07-25 01:11:27.510851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.091 [2024-07-25 01:11:27.510892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.091 [2024-07-25 01:11:27.510931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.091 [2024-07-25 01:11:27.510975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.091 [2024-07-25 01:11:27.511470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.091 [2024-07-25 01:11:27.511513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.091 [2024-07-25 01:11:27.511550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.091 [2024-07-25 01:11:27.511586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.091 [2024-07-25 01:11:27.511627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.091 [2024-07-25 01:11:27.511669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.091 [2024-07-25 01:11:27.511707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.091 [2024-07-25 01:11:27.511748] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.091 [2024-07-25 01:11:27.511788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.091 [2024-07-25 01:11:27.511832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.091 [2024-07-25 01:11:27.511869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.091 [2024-07-25 01:11:27.511918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.091 [2024-07-25 01:11:27.511959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.091 [2024-07-25 01:11:27.512002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.091 [2024-07-25 01:11:27.512049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.091 [2024-07-25 01:11:27.512096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.091 [2024-07-25 01:11:27.512138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.091 [2024-07-25 01:11:27.512189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.091 [2024-07-25 01:11:27.512229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.091 [2024-07-25 01:11:27.512278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.091 [2024-07-25 01:11:27.512329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.091 [2024-07-25 01:11:27.512372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:05.091 [2024-07-25 01:11:27.512417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.091 [2024-07-25 01:11:27.512467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.091 [2024-07-25 01:11:27.512511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.091 [2024-07-25 01:11:27.512554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.091 [2024-07-25 01:11:27.512597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.091 [2024-07-25 01:11:27.512640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.091 [2024-07-25 01:11:27.512686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.091 [2024-07-25 01:11:27.512729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.091 [2024-07-25 01:11:27.512774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.091 [2024-07-25 01:11:27.512816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.091 [2024-07-25 01:11:27.512865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.091 [2024-07-25 01:11:27.512908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.091 [2024-07-25 01:11:27.512952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.091 [2024-07-25 01:11:27.512994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.091 [2024-07-25 
01:11:27.513052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.091 [2024-07-25 01:11:27.513101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.091 [2024-07-25 01:11:27.513148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.091 [2024-07-25 01:11:27.513190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.091 [2024-07-25 01:11:27.513221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.091 [2024-07-25 01:11:27.513260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.091 [2024-07-25 01:11:27.513302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.091 [2024-07-25 01:11:27.513337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.091 [2024-07-25 01:11:27.513375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.091 [2024-07-25 01:11:27.513414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.091 [2024-07-25 01:11:27.513453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.091 [2024-07-25 01:11:27.513495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.091 [2024-07-25 01:11:27.513541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.091 [2024-07-25 01:11:27.513583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.091 [2024-07-25 01:11:27.513621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.091 [2024-07-25 01:11:27.513656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.091 [2024-07-25 01:11:27.513699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.091 [2024-07-25 01:11:27.513728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.091 [2024-07-25 01:11:27.513767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.091 [2024-07-25 01:11:27.513804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.091 [2024-07-25 01:11:27.513840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.091 [2024-07-25 01:11:27.513877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.091 [2024-07-25 01:11:27.513920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.091 [2024-07-25 01:11:27.513967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.091 [2024-07-25 01:11:27.514004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.091 [2024-07-25 01:11:27.514050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.091 [2024-07-25 01:11:27.514090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.091 [2024-07-25 01:11:27.514666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.091 [2024-07-25 01:11:27.514715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.091 
[2024-07-25 01:11:27.514755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.091 [2024-07-25 01:11:27.514802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.091 [2024-07-25 01:11:27.514842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.092 [2024-07-25 01:11:27.514885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.092 [2024-07-25 01:11:27.514936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.092 [2024-07-25 01:11:27.514982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.092 [2024-07-25 01:11:27.515026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.092 [2024-07-25 01:11:27.515078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.092 [2024-07-25 01:11:27.515124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.092 [2024-07-25 01:11:27.515165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.092 [2024-07-25 01:11:27.515210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.092 [2024-07-25 01:11:27.515257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.092 [2024-07-25 01:11:27.515300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.092 [2024-07-25 01:11:27.515348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.092 [2024-07-25 01:11:27.515393] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.092 [2024-07-25 01:11:27.515438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.092 [2024-07-25 01:11:27.515481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.092 [2024-07-25 01:11:27.515524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.092 [2024-07-25 01:11:27.515567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.092 [2024-07-25 01:11:27.515614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.092 [2024-07-25 01:11:27.515660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.092 [2024-07-25 01:11:27.515704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.092 [2024-07-25 01:11:27.515754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.092 [2024-07-25 01:11:27.515798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.092 [2024-07-25 01:11:27.515842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.092 [2024-07-25 01:11:27.515895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.092 [2024-07-25 01:11:27.515935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.092 [2024-07-25 01:11:27.515982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.092 [2024-07-25 01:11:27.516024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:05.092 [2024-07-25 01:11:27.516072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.094 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:11:05.095 [2024-07-25 01:11:27.529956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:11:05.095 [2024-07-25 01:11:27.530000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.095 [2024-07-25 01:11:27.530050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.095 [2024-07-25 01:11:27.530096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.095 [2024-07-25 01:11:27.530138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.095 [2024-07-25 01:11:27.530180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.095 [2024-07-25 01:11:27.530223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.095 [2024-07-25 01:11:27.530261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.095 [2024-07-25 01:11:27.530302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.095 [2024-07-25 01:11:27.530354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.095 [2024-07-25 01:11:27.530396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.095 [2024-07-25 01:11:27.530441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.095 [2024-07-25 01:11:27.530487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.095 [2024-07-25 01:11:27.530527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.095 [2024-07-25 01:11:27.530566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.095 [2024-07-25 01:11:27.530617] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.095 [2024-07-25 01:11:27.530663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.095 [2024-07-25 01:11:27.530699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.095 [2024-07-25 01:11:27.530741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.095 [2024-07-25 01:11:27.530778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.095 [2024-07-25 01:11:27.530821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.095 [2024-07-25 01:11:27.530862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.095 [2024-07-25 01:11:27.530899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.374 [2024-07-25 01:11:27.530948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.374 [2024-07-25 01:11:27.530987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.374 [2024-07-25 01:11:27.531028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.374 [2024-07-25 01:11:27.531062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.374 [2024-07-25 01:11:27.531102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.374 [2024-07-25 01:11:27.531140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.374 [2024-07-25 01:11:27.531179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:05.374 [2024-07-25 01:11:27.531224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.374 [2024-07-25 01:11:27.531268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.374 [2024-07-25 01:11:27.531309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.374 [2024-07-25 01:11:27.531345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.374 [2024-07-25 01:11:27.531383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.374 [2024-07-25 01:11:27.531422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.374 [2024-07-25 01:11:27.531466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.531505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.531537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.531576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.531612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.531646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.531683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.531719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 
01:11:27.531758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.531796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.531835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.531873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.531910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.531951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.531993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.532030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.532076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.532122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.532166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.532209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.532254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.532296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.532342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.532380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.532423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.532454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.532494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.532998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.533046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.533087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.533128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.533170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.533207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.533247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.533285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.533324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.533363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 
[2024-07-25 01:11:27.533404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.533443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.533487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.533529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.533578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.533620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.533663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.533707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.533752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.533799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.533847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.533888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.533931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.533973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.534017] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.534065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.534109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.534151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.534201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.534245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.534290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.534335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.534376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.534424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.534464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.534506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.534554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.534595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.534638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:05.375 [2024-07-25 01:11:27.534698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.534742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.534782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.534827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.534873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.534928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.534969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.535013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.535063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.535108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.535151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.535194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.535239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.535277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.535312] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.535350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.535394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.535442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.535483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.535523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.535563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.535594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.535631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.535674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.535711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.536186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.536227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.536269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.536308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.536350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.536388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.536423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.536462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.536506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.536541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.536579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.536627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.536673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.536716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.536759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.536802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.536846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.536896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 
01:11:27.536936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.536980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.537023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.537070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.537115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.537161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.537202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.537242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.537292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.537337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.537383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.537430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.537473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.537515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.537561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.537607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.537651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.537694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.537735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.537777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.537823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.537862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.537903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.537941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.537975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.538012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.538053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.538090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.538131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 
[2024-07-25 01:11:27.538166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.538202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.538241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.538281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.538327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.538371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.538414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.538451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.538491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.538523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.538561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.538598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.538637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.538675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.538709] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.538746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.375 [2024-07-25 01:11:27.539301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.376 [2024-07-25 01:11:27.539349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.376 [2024-07-25 01:11:27.539398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.376 [2024-07-25 01:11:27.539441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.376 [2024-07-25 01:11:27.539487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.376 [2024-07-25 01:11:27.539537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.376 [2024-07-25 01:11:27.539576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.376 [2024-07-25 01:11:27.539621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.376 [2024-07-25 01:11:27.539671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.376 [2024-07-25 01:11:27.539711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.376 [2024-07-25 01:11:27.539757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.376 [2024-07-25 01:11:27.539803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.376 [2024-07-25 01:11:27.539844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:05.376 [2024-07-25 01:11:27.539887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 [last message repeated from 01:11:27.539929 through 01:11:27.554336] 00:11:05.378 [2024-07-25 01:11:27.554380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:11:05.378 [2024-07-25 01:11:27.554429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.554473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.554937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.554981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.555019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.555060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.555097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.555135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.555178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.555209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.555246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.555284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.555324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.555363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.555399] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.555440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.555476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.555514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.555552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.555590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.555628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.555666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.555702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.555744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.555789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.555836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.555878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.555925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.555971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.556021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.556065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.556111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.556164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.556207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.556239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.556281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.556321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.556359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.556400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.556438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.556474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.556514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.556553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 
01:11:27.556586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.556626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.556663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.556703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.556747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.556793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.556838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.556884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.556931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.556973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.557015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.557060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.557102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.557142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.557190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.557227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.557269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.557310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.557346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.557376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.557416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.557461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.558003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.558057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.558111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.558156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.558198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.558255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.558295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 
[2024-07-25 01:11:27.558338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.558378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.558422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.558463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.558508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.558564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.558604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.558646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.558691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.558735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.558779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.558828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.558870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.558917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.558960] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.559005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.559055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.559099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.559139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.559195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.559241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.559283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.559330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.559372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.559415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.559461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.559506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.559547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.559594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:05.378 [2024-07-25 01:11:27.559631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.559671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.559712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.559752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.559795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.559826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.559862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.559899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.559938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.559978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.560020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.560061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.560103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.560141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.560187] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.560229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.560267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.560305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.560336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.560374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.560411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.560448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.560488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.560528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.560565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.560603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.560639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.560677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.561164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.561210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.561259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.561298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.378 [2024-07-25 01:11:27.561345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.379 [2024-07-25 01:11:27.561387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.379 [2024-07-25 01:11:27.561433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.379 [2024-07-25 01:11:27.561476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.379 [2024-07-25 01:11:27.561521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.379 [2024-07-25 01:11:27.561565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.379 [2024-07-25 01:11:27.561609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.379 [2024-07-25 01:11:27.561654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.379 [2024-07-25 01:11:27.561693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.379 [2024-07-25 01:11:27.561736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.379 [2024-07-25 01:11:27.561782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.379 [2024-07-25 
01:11:27.561821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.379 [2024-07-25 01:11:27.561868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.379 [2024-07-25 01:11:27.561908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.379 [2024-07-25 01:11:27.561950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.379 [2024-07-25 01:11:27.561995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.379 [2024-07-25 01:11:27.562032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.379 [2024-07-25 01:11:27.562080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.379 [2024-07-25 01:11:27.562126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.379 [2024-07-25 01:11:27.562170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.379 [2024-07-25 01:11:27.562216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.379 [2024-07-25 01:11:27.562258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.379 [2024-07-25 01:11:27.562301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.379 [2024-07-25 01:11:27.562339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.379 [2024-07-25 01:11:27.562369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.379 [2024-07-25 01:11:27.562413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.379 [2024-07-25 01:11:27.562454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.379 [2024-07-25 01:11:27.562495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.379 [2024-07-25 01:11:27.562537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.379 [2024-07-25 01:11:27.562577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.379 [2024-07-25 01:11:27.562615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.379 [2024-07-25 01:11:27.562655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.379 [2024-07-25 01:11:27.562700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.379 [2024-07-25 01:11:27.562745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.379 [2024-07-25 01:11:27.562785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.379 [2024-07-25 01:11:27.562818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.379 [2024-07-25 01:11:27.562858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.379 [2024-07-25 01:11:27.562888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.379 [2024-07-25 01:11:27.562925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.379 [2024-07-25 01:11:27.562962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.379 
[2024-07-25 01:11:27.563004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.379 [2024-07-25 01:11:27.563048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.379 [2024-07-25 01:11:27.563085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.379 [2024-07-25 01:11:27.563125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.379 [2024-07-25 01:11:27.563166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.379 [2024-07-25 01:11:27.563205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.379 [2024-07-25 01:11:27.563242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.379 [2024-07-25 01:11:27.563280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.379 [2024-07-25 01:11:27.563318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.379 [2024-07-25 01:11:27.563362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.379 [2024-07-25 01:11:27.563401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.379 [2024-07-25 01:11:27.563438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.379 [2024-07-25 01:11:27.563478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.379 [2024-07-25 01:11:27.563518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.379 [2024-07-25 01:11:27.563560] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.379 [2024-07-25 01:11:27.563607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
> SGL length 1 00:11:05.381 [2024-07-25 01:11:27.578602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.578635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.578672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.578712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.578750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.578786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.578825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.578862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.578898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.578942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.578990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.579030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.579077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.579112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.579147] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.579183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.579221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.579261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.579300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.579344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.579851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.579894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.579933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.579976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.580024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.580076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.580118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.580164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.580205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.580250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.580294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.580337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.580383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.580427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.580469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.580516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.580557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.580587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.580627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.580663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.580698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.580742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.580788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 
01:11:27.580830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.580874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.580911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.580953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.580990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.581024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.581067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.581108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.581144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.581173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.581213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.581259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.581294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.581331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.581361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.581389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.581418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.581451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.581479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.581517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.581555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.581594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.581633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.581675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.581712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.581761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.581802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.581844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.581888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 
[2024-07-25 01:11:27.581931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.581975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.582018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.582067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.582111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.582157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.582198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.582241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.582294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.582333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.582378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.582879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.582926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.582970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.583016] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.583059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.583098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.583143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.583181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.583220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.583259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.583298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.583337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.583372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.583403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.583442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.583478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.583513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.583550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:05.381 [2024-07-25 01:11:27.583597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.583638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.583678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.583716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.583751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.583791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.583828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.583864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.583906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.583945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.583985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.584025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.584067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.584104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.584153] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.584194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.381 [2024-07-25 01:11:27.584241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.382 [2024-07-25 01:11:27.584285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.382 [2024-07-25 01:11:27.584327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.382 [2024-07-25 01:11:27.584371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.382 [2024-07-25 01:11:27.584417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.382 [2024-07-25 01:11:27.584460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.382 [2024-07-25 01:11:27.584504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.382 [2024-07-25 01:11:27.584544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.382 [2024-07-25 01:11:27.584588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.382 [2024-07-25 01:11:27.584633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.382 [2024-07-25 01:11:27.584677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.382 [2024-07-25 01:11:27.584724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.382 [2024-07-25 01:11:27.584767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:05.382 [2024-07-25 01:11:27.584811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.382 [2024-07-25 01:11:27.584857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.382 [2024-07-25 01:11:27.584901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.382 [2024-07-25 01:11:27.584945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.382 [2024-07-25 01:11:27.584993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.382 [2024-07-25 01:11:27.585047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.382 [2024-07-25 01:11:27.585090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.382 [2024-07-25 01:11:27.585133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.382 [2024-07-25 01:11:27.585180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.382 [2024-07-25 01:11:27.585225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.382 [2024-07-25 01:11:27.585270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.382 [2024-07-25 01:11:27.585313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.382 [2024-07-25 01:11:27.585358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.382 [2024-07-25 01:11:27.585401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.382 [2024-07-25 
01:11:27.585446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.382 [2024-07-25 01:11:27.585489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.382 [2024-07-25 01:11:27.585532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.382 [2024-07-25 01:11:27.585993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.382 [2024-07-25 01:11:27.586036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.382 [2024-07-25 01:11:27.586076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.382 [2024-07-25 01:11:27.586115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.382 [2024-07-25 01:11:27.586150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.382 [2024-07-25 01:11:27.586188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.382 [2024-07-25 01:11:27.586223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.382 [2024-07-25 01:11:27.586261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.382 [2024-07-25 01:11:27.586299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.382 [2024-07-25 01:11:27.586339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.382 [2024-07-25 01:11:27.586381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.382 [2024-07-25 01:11:27.586420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.382 [2024-07-25 01:11:27.586454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.382 [2024-07-25 01:11:27.586483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.382 [2024-07-25 01:11:27.586520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.382 [2024-07-25 01:11:27.586553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.382 [2024-07-25 01:11:27.586591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.382 [2024-07-25 01:11:27.586630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.382 [2024-07-25 01:11:27.586666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.382 [2024-07-25 01:11:27.586707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.382 [2024-07-25 01:11:27.586749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.382 [2024-07-25 01:11:27.586786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.382 [2024-07-25 01:11:27.586827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.382 [2024-07-25 01:11:27.586861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.382 [2024-07-25 01:11:27.586900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.382 [2024-07-25 01:11:27.586937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.382 
[2024-07-25 01:11:27.586979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.382 [2024-07-25 01:11:27.587022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.382 [2024-07-25 01:11:27.587069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.382 [2024-07-25 01:11:27.587111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.382 [2024-07-25 01:11:27.587141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.382 [2024-07-25 01:11:27.587175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.382 [2024-07-25 01:11:27.587215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.382 [2024-07-25 01:11:27.587254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.382 [2024-07-25 01:11:27.587293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.382 [2024-07-25 01:11:27.587331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.382 [2024-07-25 01:11:27.587367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.382 [2024-07-25 01:11:27.587407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.382 [2024-07-25 01:11:27.587453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.382 [2024-07-25 01:11:27.587497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.382 [2024-07-25 01:11:27.587547] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.382 [repeated identical *ERROR* messages from 01:11:27.587588 through 01:11:27.602738 omitted] 00:11:05.384 [2024-07-25 01:11:27.602778] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.602827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.602868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.602913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.602959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.603002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.603052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.603098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.603145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.603186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.603224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.603270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.603313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.603350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.603394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:05.384 [2024-07-25 01:11:27.603429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.603468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.603509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.603549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.603586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.603622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.603663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.603700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.603738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.603781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.603823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.603862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.603901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.603932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.603968] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.604004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.604037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.604078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.604118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.604158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.604194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.604233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.604276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.604770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.604818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.604864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.604907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.604957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.605002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.605047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.605088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.605137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.605182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.605224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.605272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.605311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.605355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.605397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.605441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.605487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.605529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.605571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.605615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 
01:11:27.605660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.605704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.605749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.605794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.605839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.605882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.605919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.605958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.605999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.606036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.606071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.606113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.606154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.606195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.606231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.606267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.606305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.606345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.606391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.606429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.606467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.606502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.606542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.606573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.606615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.606654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.606693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.606731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.606764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 
[2024-07-25 01:11:27.606799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.606840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.606879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.606921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.606965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.607003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.384 [2024-07-25 01:11:27.607046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.607087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.607123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.607159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.607199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.607245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.607287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.607334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.607821] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.607872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.607916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.607958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.608006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.608061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.608103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.608141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.608182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.608212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.608253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.608289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.608327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.608375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.608422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:05.385 [2024-07-25 01:11:27.608460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.608499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.608540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.608588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.608625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.608669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.608705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.608740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.608779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.608816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.608857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.608894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.608932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.608969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.609010] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.609053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.609092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.609134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.609175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.609224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.609277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.609318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.609364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.609411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.609454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.609498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.609548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.609592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.609632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.609681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.609733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.609775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.609819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.609865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.609911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.609957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.610006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.610056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.610100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.610147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.610191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.610235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.610278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 
01:11:27.610321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.610366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.610411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.610455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.610503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.610545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.611036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.611084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.611123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.611158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.611196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.611238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.611281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.611319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.611352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.611389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.611433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.611474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.611516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.611557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.611599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.611640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.611682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.611722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.611766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.611819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.611862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.611905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.611948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 
[2024-07-25 01:11:27.611991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.612037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.612088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.612130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.612172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.612221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.612262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.612306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.612351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.612399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.612441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.612487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.612532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.612577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.612621] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.385 [2024-07-25 01:11:27.612659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[identical *ERROR* line repeated verbatim for timestamps 01:11:27.612698 through 01:11:27.626918; duplicates omitted]
00:11:05.387 Message suppressed 999 times: [2024-07-25 01:11:27.626959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:11:05.387 Read completed with error (sct=0, sc=15) 00:11:05.387 [2024-07-25 01:11:27.627002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.387 [2024-07-25 01:11:27.627049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.387 [2024-07-25 01:11:27.627092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.387 [2024-07-25 01:11:27.627138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.387 [2024-07-25 01:11:27.627184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.387 [2024-07-25 01:11:27.627228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.387 [2024-07-25 01:11:27.627272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.387 [2024-07-25 01:11:27.627316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.387 [2024-07-25 01:11:27.627361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.387 [2024-07-25 01:11:27.627404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.387 [2024-07-25 01:11:27.627447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.387 [2024-07-25 01:11:27.627488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.387 [2024-07-25 01:11:27.627531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.387 [2024-07-25 01:11:27.627573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 
1 00:11:05.387 [2024-07-25 01:11:27.627619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.387 [2024-07-25 01:11:27.627675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.387 [2024-07-25 01:11:27.627719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.387 [2024-07-25 01:11:27.627766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.387 [2024-07-25 01:11:27.627797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.387 [2024-07-25 01:11:27.627836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.387 [2024-07-25 01:11:27.627876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.387 [2024-07-25 01:11:27.627913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.387 [2024-07-25 01:11:27.627951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.387 [2024-07-25 01:11:27.627989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.387 [2024-07-25 01:11:27.628030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.387 [2024-07-25 01:11:27.628075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.387 [2024-07-25 01:11:27.628115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.387 [2024-07-25 01:11:27.628154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.387 [2024-07-25 01:11:27.628194] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.387 [2024-07-25 01:11:27.628231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.387 [2024-07-25 01:11:27.628265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.387 [2024-07-25 01:11:27.628298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.387 [2024-07-25 01:11:27.628336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.387 [2024-07-25 01:11:27.628372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.387 [2024-07-25 01:11:27.628408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.387 [2024-07-25 01:11:27.628442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.387 [2024-07-25 01:11:27.628477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.387 [2024-07-25 01:11:27.628522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.387 [2024-07-25 01:11:27.628561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.387 [2024-07-25 01:11:27.628600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.387 [2024-07-25 01:11:27.628639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.387 [2024-07-25 01:11:27.628677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.387 [2024-07-25 01:11:27.628716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:05.387 [2024-07-25 01:11:27.628760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.387 [2024-07-25 01:11:27.628802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.387 [2024-07-25 01:11:27.628848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.387 [2024-07-25 01:11:27.628894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.387 [2024-07-25 01:11:27.628940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.387 [2024-07-25 01:11:27.628984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.387 [2024-07-25 01:11:27.629025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.387 [2024-07-25 01:11:27.629072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.387 [2024-07-25 01:11:27.629115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.387 [2024-07-25 01:11:27.629159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.387 [2024-07-25 01:11:27.629203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.387 [2024-07-25 01:11:27.629248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.387 [2024-07-25 01:11:27.629727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.629777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.629825] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.629871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.629912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.629958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.630003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.630053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.630098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.630144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.630187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.630230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.630273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.630314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.630357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.630401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.630438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.630467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.630504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.630543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.630577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.630616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.630655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.630696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.630732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.630770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.630807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.630844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.630889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.630935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.630973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 
01:11:27.631007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.631053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.631091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.631131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.631169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.631208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.631242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.631280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.631320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.631358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.631399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.631444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.631483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.631523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.631563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.631599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.631651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.631698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.631742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.631788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.631837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.631880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.631928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.631985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.632030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.632075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.632126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.632169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.632213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 
[2024-07-25 01:11:27.632264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.632310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.632354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.632841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.632887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.632926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.632964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.633003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.633051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.633093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.633129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.633167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.633210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.633254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.633285] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.633323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.633360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.633398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.633439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.633477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.633519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.633560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.633601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.633639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.633679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.633720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.633750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.633782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.633825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:05.388 [2024-07-25 01:11:27.633864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.633903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.633945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.633990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.634029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.634077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.634121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.634166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.634213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.634254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.634296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.634339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.634379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.634417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.634458] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.634503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.634539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.634579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.634618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.634655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.634699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.634738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.634776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.634813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.634853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.634894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.634929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.634973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.635021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.635067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.635110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.635152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.635201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.635241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.635289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.635331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.635375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.635422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.635910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.635958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.636001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.636051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 01:11:27.636098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.388 [2024-07-25 
01:11:27.636145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
01:11:27.650561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.390 [2024-07-25 01:11:27.650599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.390 [2024-07-25 01:11:27.650634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.390 [2024-07-25 01:11:27.650671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.390 [2024-07-25 01:11:27.650707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.390 [2024-07-25 01:11:27.650744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.390 [2024-07-25 01:11:27.650789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.390 [2024-07-25 01:11:27.650832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.390 [2024-07-25 01:11:27.650880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.390 [2024-07-25 01:11:27.651490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.390 [2024-07-25 01:11:27.651539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.390 [2024-07-25 01:11:27.651572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.390 [2024-07-25 01:11:27.651607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.390 [2024-07-25 01:11:27.651647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.390 [2024-07-25 01:11:27.651693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.390 [2024-07-25 01:11:27.651736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.390 [2024-07-25 01:11:27.651780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.390 [2024-07-25 01:11:27.651823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.390 [2024-07-25 01:11:27.651866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.390 [2024-07-25 01:11:27.651913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.651957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.652001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.652049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.652089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.652134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.652180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.652223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.652269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.652315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 
[2024-07-25 01:11:27.652359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.652401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.652445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.652485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.652526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.652569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.652611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.652660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.652702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.652747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.652791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.652831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.652866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.652904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.652941] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.652980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.653017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.653060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.653105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.653141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.653180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.653218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.653258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.653301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.653344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.653390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.653425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.653461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.653500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:05.391 [2024-07-25 01:11:27.653536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.653578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.653615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.653655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.653695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.653734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.653777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.653815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.653843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.653875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.653918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.653953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.653993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.654034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.654077] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.654570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.654618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.654664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.654709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.654754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.654796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.654837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.654885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.654928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.654975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.655017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.655071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.655116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.655156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.655201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.655241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.655284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.655327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.655368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.655414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.655456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.655496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.655535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.655574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.655612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.655650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.655687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.655729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 
01:11:27.655776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.655816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.655846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.655886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.655920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.655961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.655999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.656039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.656082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.656118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.656162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.656191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.656228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.656265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.656304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.656341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.656382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.656421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.656459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.656497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.656536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.656573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.656611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.656650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.656691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.656733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.656776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.656817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.656862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 
[2024-07-25 01:11:27.656910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.656949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.656993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.657046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.657087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.657132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.657636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.657681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.657718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.657759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.657798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.657836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.657875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.657915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.657950] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.657988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.658025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.658078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.658117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.658153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.658186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.658217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.658252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.658297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.658345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.658380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.658419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.391 [2024-07-25 01:11:27.658457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.392 [2024-07-25 01:11:27.658498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:05.392 [2024-07-25 01:11:27.658539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.392 [2024-07-25 01:11:27.658580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.392 true 00:11:05.392 [2024-07-25 01:11:27.658620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.392 [2024-07-25 01:11:27.658660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.392 [2024-07-25 01:11:27.658700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.392 [2024-07-25 01:11:27.658741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.392 [2024-07-25 01:11:27.658779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.392 [2024-07-25 01:11:27.658817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.392 [2024-07-25 01:11:27.658858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.392 [2024-07-25 01:11:27.658892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.392 [2024-07-25 01:11:27.658949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.392 [2024-07-25 01:11:27.658993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.392 [2024-07-25 01:11:27.659039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.392 [2024-07-25 01:11:27.659100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.392 [2024-07-25 
01:11:27.659144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.392 [2024-07-25 01:11:27.659188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.392 [2024-07-25 01:11:27.659230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.392 [2024-07-25 01:11:27.659273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.392 [2024-07-25 01:11:27.659314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.392 [2024-07-25 01:11:27.659357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.392 [2024-07-25 01:11:27.659396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.392 [2024-07-25 01:11:27.659435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.392 [2024-07-25 01:11:27.659473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.392 [2024-07-25 01:11:27.659507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.392 [2024-07-25 01:11:27.659546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.392 [2024-07-25 01:11:27.659586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.392 [2024-07-25 01:11:27.659628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.392 [2024-07-25 01:11:27.659667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.392 [2024-07-25 01:11:27.659712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.392 [2024-07-25 01:11:27.659747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.392 
[... preceding message repeated verbatim with advancing timestamps, 2024-07-25 01:11:27.659783 through 01:11:27.674506 ...]
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.674546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.674587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.674632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.674674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.674719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.674764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.674811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.674851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.674899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.674940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.674985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.675030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.675080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.675124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 
[2024-07-25 01:11:27.675175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.675215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.675255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.675304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.675346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.675391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.675439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.675483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.675525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.675579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.675620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.675664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:11:05.394 [2024-07-25 01:11:27.676159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.676205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:11:05.394 [2024-07-25 01:11:27.676244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.676282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.676325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.676369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.676408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.676447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.676486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.676523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.676553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.676588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.676626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.676663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.676699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.676736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.676780] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.676810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.676847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.676887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.676930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.676969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.677007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.677053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.677091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.677127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.677166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.677206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.677250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.677295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.677339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:05.394 [2024-07-25 01:11:27.677384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.677425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.677469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.677515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.677561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.677603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.677646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.677686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.677724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.677761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.677804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.677845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.677878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.677916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.677955] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.677988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.678024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.678068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.678109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.678148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.678187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.678230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.678272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.678315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.678362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.678404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.678443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.678495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.678539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.678580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.678628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.678670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.678711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.679207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.679250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.679286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.679326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.679366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.679406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.679445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.679484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.679521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.679558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 
01:11:27.679594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.679631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.679670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.679709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.679752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.679789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.679827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.679870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.679912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.679952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.679991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.680030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.680065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.680107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.680151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.680197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.680242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.680282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.680327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.680370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.680411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.680455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.680504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.680546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.680588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.394 [2024-07-25 01:11:27.680634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.680676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.680721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.680765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 
[2024-07-25 01:11:27.680809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.680850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.680893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.680941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.680989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.681029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.681077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.681126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.681169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.681213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.681254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.681296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.681338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.681376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.681417] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.681458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.681499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.681537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.681578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.681616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.681654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.681695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.681732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.681767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.682254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.682302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.682345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.682395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.682435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:05.395 [2024-07-25 01:11:27.682481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.682519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.682565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.682607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.682647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.682699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.682740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.682787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.682828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.682875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.682919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.682966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.683008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.683056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.683100] 
00:11:05.395 01:11:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 789518 00:11:05.395 01:11:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:05.395 [2024-07-25
01:11:27.686494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.686535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.686577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.686618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.686660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.686702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.686738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.686775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.686818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.686865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.686908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.686950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.686998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.687045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.687091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.687135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.687174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.687214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.687264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.687305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.687346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.687396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.687436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.687483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.687527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.687568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.687612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.687659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.687701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 
[2024-07-25 01:11:27.687752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.687794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.687840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.687881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.687927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.687971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.688015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.688060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.395 [2024-07-25 01:11:27.688104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.688581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.688622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.688662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.688701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.688737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.688779] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.688809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.688849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.688888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.688928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.688972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.689013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.689056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.689091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.689130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.689166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.689208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.689247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.689291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.689336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:05.396 [2024-07-25 01:11:27.689376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.689418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.689460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.689508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.689550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.689595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.689643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.689686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.689727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.689778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.689818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.689860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.689906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.689946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.689989] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.690033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.690080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.690121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.690165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.690208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.690238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.690281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.690317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.690354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.690391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.690429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.690466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.690505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.690548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.690590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.690624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.690658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.690708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.690749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.690791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.690832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.690874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.690917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.690963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.691004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.691050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.691098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.691146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 
01:11:27.691196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.691698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.691736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.691779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.691818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.691856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.691894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.691930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.691969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.692007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.692062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.692104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.692141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.692180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.692217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.692256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.692301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.692343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.692386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.692439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.692482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.692526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.692578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.692623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.692670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.692712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.692760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.692801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.692844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 
[2024-07-25 01:11:27.692897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.692939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.692982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.693030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.693084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.693136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.693187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.693229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.693277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.693327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.693371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.693417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.693460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.693501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.693548] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.693593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.693630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.693667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.693706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.693746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.693776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.693818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.693854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.693891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.693930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.693971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.694011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.694052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.694095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:05.396 [2024-07-25 01:11:27.694135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.694173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.694210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.694239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.694280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.694319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.694855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.694909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.694951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.694994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.695039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.695090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.695137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.695178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.695221] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.695262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.695306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.695347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.695389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.695433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.695477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.695519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.695561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.695605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.695649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.695693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.695734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.695777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:05.396 [2024-07-25 01:11:27.695825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1
00:11:05.396 [2024-07-25 01:11:27.695867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[4 more identical *ERROR* lines omitted, timestamps 01:11:27.695911-01:11:27.696049]
00:11:06.336 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:11:06.336 01:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:06.336 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
[4 more identical "Message suppressed" lines omitted, 00:11:06.336-00:11:06.595]
00:11:06.595 01:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009
00:11:06.595 01:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009
00:11:06.855 true
00:11:06.855 01:11:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 789518
00:11:06.855 01:11:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:07.795 01:11:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:07.795 01:11:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010
00:11:07.795 01:11:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010
00:11:08.055 true
00:11:08.055 01:11:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 789518
00:11:08.055 01:11:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:08.055 01:11:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:08.315 01:11:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011
00:11:08.315 01:11:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011
00:11:08.575 true
00:11:08.575 01:11:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 789518
00:11:08.575 01:11:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:09.956 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:11:09.956 01:11:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:09.956 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
[5 more identical "Message suppressed" lines omitted, 00:11:09.956]
00:11:09.956 01:11:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012
00:11:09.956 01:11:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012
00:11:10.216 true
00:11:10.216 01:11:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 789518
00:11:10.216 01:11:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:11.155 01:11:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:11.155 01:11:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013
00:11:11.155 01:11:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013
00:11:11.415 true
00:11:11.416 01:11:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 789518
00:11:11.416 01:11:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:11.416 01:11:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:11.675 01:11:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014
00:11:11.675 01:11:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014
00:11:11.935 true
00:11:11.935 01:11:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 789518
00:11:11.935 01:11:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:13.317 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:11:13.317 01:11:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:13.317 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
[4 more identical "Message suppressed" lines omitted, 00:11:13.317]
00:11:13.317 01:11:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015
00:11:13.317 01:11:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015
00:11:13.317 true
00:11:13.577 01:11:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 789518
00:11:13.577 01:11:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:14.147 01:11:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:14.147 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:11:14.407 01:11:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016
00:11:14.407 01:11:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016
00:11:14.667 true
00:11:14.667 01:11:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 789518
00:11:14.667 01:11:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:14.942 01:11:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:14.942 01:11:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017
00:11:14.942 01:11:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017
00:11:15.202 true
00:11:15.202 01:11:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 789518
00:11:15.202 01:11:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:15.467 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:11:15.467 01:11:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:15.467 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
[3 more identical "Message suppressed" lines omitted, 00:11:15.467]
00:11:15.467 [2024-07-25 01:11:37.917068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[identical *ERROR* line repeated for every queued read, timestamps 01:11:37.917133 through 01:11:37.927723; duplicates omitted]
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.469 [2024-07-25 01:11:37.927766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.469 [2024-07-25 01:11:37.927809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.469 [2024-07-25 01:11:37.927852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.469 [2024-07-25 01:11:37.927894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.469 [2024-07-25 01:11:37.927941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.469 [2024-07-25 01:11:37.927983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.469 [2024-07-25 01:11:37.928025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.469 [2024-07-25 01:11:37.928076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.469 [2024-07-25 01:11:37.928117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.469 [2024-07-25 01:11:37.928158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.469 [2024-07-25 01:11:37.928200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.469 [2024-07-25 01:11:37.928252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.469 [2024-07-25 01:11:37.928292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.469 [2024-07-25 01:11:37.928335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:15.469 [2024-07-25 01:11:37.928375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.928417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.928453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.928483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.928520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.928556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.928593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.928630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.928668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.928705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.928753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.928794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.928838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.928876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.928913] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.929094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.929452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.929495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.929536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.929576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.929611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.929650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 Message suppressed 999 times: [2024-07-25 01:11:37.929690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 Read completed with error (sct=0, sc=15) 00:11:15.470 [2024-07-25 01:11:37.929737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.929779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.929822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.929863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.929906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 
01:11:37.929951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.929997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.930038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.930090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.930134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.930176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.930222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.930262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.930304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.930353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.930396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.930435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.930486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.930526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.930568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.930620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.930663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.930707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.930756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.930798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.930840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.930884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.930927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.930970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.931013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.931062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.931106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.931147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.931194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 
[2024-07-25 01:11:37.931232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.931272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.931313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.931357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.931401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.931437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.931473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.931510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.931546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.931585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.931622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.931656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.931692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.931729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.931764] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.931803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.931832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.931869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.931905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.931942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.931981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.932477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.932528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.932567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.932606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.932644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.932689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.932725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.932764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:15.470 [2024-07-25 01:11:37.932815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.932858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.932905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.932943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.932987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.933033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.933083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.933126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.933167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.470 [2024-07-25 01:11:37.933213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 [2024-07-25 01:11:37.933256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 [2024-07-25 01:11:37.933308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 [2024-07-25 01:11:37.933352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 [2024-07-25 01:11:37.933393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 [2024-07-25 01:11:37.933441] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 [2024-07-25 01:11:37.933481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 [2024-07-25 01:11:37.933513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 [2024-07-25 01:11:37.933552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 [2024-07-25 01:11:37.933592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 [2024-07-25 01:11:37.933631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 [2024-07-25 01:11:37.933671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 [2024-07-25 01:11:37.933709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 [2024-07-25 01:11:37.933752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 [2024-07-25 01:11:37.933790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 [2024-07-25 01:11:37.933831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 [2024-07-25 01:11:37.933867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 [2024-07-25 01:11:37.933906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 [2024-07-25 01:11:37.933936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 [2024-07-25 01:11:37.933973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:15.471 [2024-07-25 01:11:37.934008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 [2024-07-25 01:11:37.934049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 [2024-07-25 01:11:37.934090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 [2024-07-25 01:11:37.934126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 [2024-07-25 01:11:37.934169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 [2024-07-25 01:11:37.934206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 [2024-07-25 01:11:37.934245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 [2024-07-25 01:11:37.934283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 [2024-07-25 01:11:37.934320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 [2024-07-25 01:11:37.934357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 [2024-07-25 01:11:37.934399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 [2024-07-25 01:11:37.934436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 [2024-07-25 01:11:37.934473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 [2024-07-25 01:11:37.934511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 [2024-07-25 
01:11:37.934548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 [2024-07-25 01:11:37.934593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 [2024-07-25 01:11:37.934641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 [2024-07-25 01:11:37.934691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 [2024-07-25 01:11:37.934735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 [2024-07-25 01:11:37.934781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 [2024-07-25 01:11:37.934826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 [2024-07-25 01:11:37.934878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 [2024-07-25 01:11:37.934924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 [2024-07-25 01:11:37.934966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 [2024-07-25 01:11:37.935014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 [2024-07-25 01:11:37.935059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 [2024-07-25 01:11:37.935105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 [2024-07-25 01:11:37.935287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 [2024-07-25 01:11:37.935638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 [2024-07-25 01:11:37.935684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 [2024-07-25 01:11:37.935730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 [2024-07-25 01:11:37.935772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 [2024-07-25 01:11:37.935815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 [2024-07-25 01:11:37.935862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 [2024-07-25 01:11:37.935910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 [2024-07-25 01:11:37.935957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 [2024-07-25 01:11:37.935999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 [2024-07-25 01:11:37.936047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 [2024-07-25 01:11:37.936088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 [2024-07-25 01:11:37.936132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 [2024-07-25 01:11:37.936175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 [2024-07-25 01:11:37.936216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 [2024-07-25 01:11:37.936258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 
[2024-07-25 01:11:37.936292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 [2024-07-25 01:11:37.936329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 [2024-07-25 01:11:37.936370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 [2024-07-25 01:11:37.936410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 [2024-07-25 01:11:37.936445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 [2024-07-25 01:11:37.936488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 [2024-07-25 01:11:37.936529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 [2024-07-25 01:11:37.936571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 [2024-07-25 01:11:37.936608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 [2024-07-25 01:11:37.936638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 [2024-07-25 01:11:37.936679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 [2024-07-25 01:11:37.936714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 [2024-07-25 01:11:37.936751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 [2024-07-25 01:11:37.936791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.471 [2024-07-25 01:11:37.936829] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:11:15.471 [... last *ERROR* line repeated verbatim (timestamps 01:11:37.936915 through 01:11:37.943612 elided) ...]
00:11:15.473 01:11:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018
00:11:15.473 01:11:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018
00:11:15.473 [... same *ERROR* line continues repeating (timestamps 01:11:37.943647 through 01:11:37.950490 elided) ...]
00:11:15.475 [2024-07-25 01:11:37.950532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:11:15.475 [2024-07-25 01:11:37.951055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.951099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.951139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.951178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.951213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.951250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.951287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.951335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.951381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.951420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.951463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.951511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.951552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.951597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.951642] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.951687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.951736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.951775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.951819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.951865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.951910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.951952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.951999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.952040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.952088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.952131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.952165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.952201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.952241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.952281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.952317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.952354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.952393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.952432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.952470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.952508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.952544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.952581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.952611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.952651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.952692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.952732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.952769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 
01:11:37.952811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.952852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.952888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.952927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.952965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.953003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.953046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.953083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.953124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.953166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.953214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.953263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.953305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.953356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.953399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.953442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.953485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.953541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.953584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.953628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.953675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.953860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.954229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.954273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.954319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.954366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.954410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.954457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.954500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 
[2024-07-25 01:11:37.954540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.954587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.954632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.954666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.954698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.954736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.475 [2024-07-25 01:11:37.954771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.476 [2024-07-25 01:11:37.954811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.476 [2024-07-25 01:11:37.954849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.476 [2024-07-25 01:11:37.954888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.476 [2024-07-25 01:11:37.954925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.476 [2024-07-25 01:11:37.954961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.476 [2024-07-25 01:11:37.954998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.476 [2024-07-25 01:11:37.955035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.476 [2024-07-25 01:11:37.955077] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.476 [2024-07-25 01:11:37.955124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.476 [2024-07-25 01:11:37.955167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.476 [2024-07-25 01:11:37.955199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.476 [2024-07-25 01:11:37.955235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.476 [2024-07-25 01:11:37.955274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.476 [2024-07-25 01:11:37.955318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.476 [2024-07-25 01:11:37.955362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.476 [2024-07-25 01:11:37.955404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.476 [2024-07-25 01:11:37.955446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.476 [2024-07-25 01:11:37.955478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.476 [2024-07-25 01:11:37.955512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.476 [2024-07-25 01:11:37.955547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.476 [2024-07-25 01:11:37.955587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.476 [2024-07-25 01:11:37.955630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:15.476 [2024-07-25 01:11:37.955668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.476 [2024-07-25 01:11:37.955705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.476 [2024-07-25 01:11:37.955744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.476 [2024-07-25 01:11:37.955780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.476 [2024-07-25 01:11:37.955819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.476 [2024-07-25 01:11:37.955857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.476 [2024-07-25 01:11:37.955895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.785 [2024-07-25 01:11:37.955935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.785 [2024-07-25 01:11:37.955970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.785 [2024-07-25 01:11:37.956017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.785 [2024-07-25 01:11:37.956067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.785 [2024-07-25 01:11:37.956111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.785 [2024-07-25 01:11:37.956153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.785 [2024-07-25 01:11:37.956200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.785 [2024-07-25 01:11:37.956243] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.785 [2024-07-25 01:11:37.956286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.785 [2024-07-25 01:11:37.956329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.785 [2024-07-25 01:11:37.956376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.785 [2024-07-25 01:11:37.956420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.785 [2024-07-25 01:11:37.956459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.785 [2024-07-25 01:11:37.956500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.785 [2024-07-25 01:11:37.956551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.785 [2024-07-25 01:11:37.956594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.785 [2024-07-25 01:11:37.956636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.785 [2024-07-25 01:11:37.956687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.785 [2024-07-25 01:11:37.956730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.785 [2024-07-25 01:11:37.957236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.785 [2024-07-25 01:11:37.957281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.785 [2024-07-25 01:11:37.957318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:15.785 [2024-07-25 01:11:37.957355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.785 [2024-07-25 01:11:37.957391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.785 [2024-07-25 01:11:37.957427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.785 [2024-07-25 01:11:37.957465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.785 [2024-07-25 01:11:37.957501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.785 [2024-07-25 01:11:37.957540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.785 [2024-07-25 01:11:37.957577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.785 [2024-07-25 01:11:37.957612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.785 [2024-07-25 01:11:37.957654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.785 [2024-07-25 01:11:37.957692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.785 [2024-07-25 01:11:37.957723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.785 [2024-07-25 01:11:37.957760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.785 [2024-07-25 01:11:37.957794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.785 [2024-07-25 01:11:37.957828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.785 [2024-07-25 
01:11:37.957869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.785 [2024-07-25 01:11:37.957906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.785 [2024-07-25 01:11:37.957945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.785 [2024-07-25 01:11:37.957978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.785 [2024-07-25 01:11:37.958015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.785 [2024-07-25 01:11:37.958055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.785 [2024-07-25 01:11:37.958098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.785 [2024-07-25 01:11:37.958144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.785 [2024-07-25 01:11:37.958181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.785 [2024-07-25 01:11:37.958219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.786 [2024-07-25 01:11:37.958263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.786 [2024-07-25 01:11:37.958301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.786 [2024-07-25 01:11:37.958337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.786 [2024-07-25 01:11:37.958373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.786 [2024-07-25 01:11:37.958408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.786 [2024-07-25 01:11:37.958445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.786 [2024-07-25 01:11:37.958473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.786 [2024-07-25 01:11:37.958509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.786 [2024-07-25 01:11:37.958545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.786 [2024-07-25 01:11:37.958587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.786 [2024-07-25 01:11:37.958629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.786 [2024-07-25 01:11:37.958675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.786 [2024-07-25 01:11:37.958718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.786 [2024-07-25 01:11:37.958764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.786 [2024-07-25 01:11:37.958806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.786 [2024-07-25 01:11:37.958850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.786 [2024-07-25 01:11:37.958898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.786 [2024-07-25 01:11:37.958941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.786 [2024-07-25 01:11:37.958983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.786 
[2024-07-25 01:11:37.959034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.786 [2024-07-25 01:11:37.959079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.786 [2024-07-25 01:11:37.959120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.786 [2024-07-25 01:11:37.959161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.786 [2024-07-25 01:11:37.959205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.786 [2024-07-25 01:11:37.959246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.786 [2024-07-25 01:11:37.959289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.786 [2024-07-25 01:11:37.959331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.786 [2024-07-25 01:11:37.959375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.786 [2024-07-25 01:11:37.959424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.786 [2024-07-25 01:11:37.959464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.786 [2024-07-25 01:11:37.959502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.786 [2024-07-25 01:11:37.959544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.786 [2024-07-25 01:11:37.959584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.786 [2024-07-25 01:11:37.959625] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.786 [2024-07-25 01:11:37.959663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.786 [2024-07-25 01:11:37.959702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.786 [2024-07-25 01:11:37.959738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.786 [2024-07-25 01:11:37.959907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.786 [2024-07-25 01:11:37.960314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.786 [2024-07-25 01:11:37.960360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.786 [2024-07-25 01:11:37.960404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.786 [2024-07-25 01:11:37.960445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.786 [2024-07-25 01:11:37.960486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.786 [2024-07-25 01:11:37.960531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.786 [2024-07-25 01:11:37.960575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.786 [2024-07-25 01:11:37.960623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.786 [2024-07-25 01:11:37.960665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.786 [2024-07-25 01:11:37.960709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:15.786 [2024-07-25 01:11:37.960751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
> SGL length 1 00:11:15.789 [2024-07-25 01:11:37.975123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.789 [2024-07-25 01:11:37.975160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.789 [2024-07-25 01:11:37.975197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.789 [2024-07-25 01:11:37.975713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.789 [2024-07-25 01:11:37.975763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.789 [2024-07-25 01:11:37.975807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.789 [2024-07-25 01:11:37.975853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.789 [2024-07-25 01:11:37.975895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.789 [2024-07-25 01:11:37.975938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.789 [2024-07-25 01:11:37.975979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.789 [2024-07-25 01:11:37.976023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.789 [2024-07-25 01:11:37.976069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.789 [2024-07-25 01:11:37.976112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.789 [2024-07-25 01:11:37.976160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.789 [2024-07-25 01:11:37.976206] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.789 [2024-07-25 01:11:37.976249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.789 [2024-07-25 01:11:37.976295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.789 [2024-07-25 01:11:37.976340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.789 [2024-07-25 01:11:37.976386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.789 [2024-07-25 01:11:37.976429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.789 [2024-07-25 01:11:37.976480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.789 [2024-07-25 01:11:37.976522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.789 [2024-07-25 01:11:37.976564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.789 [2024-07-25 01:11:37.976612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.789 [2024-07-25 01:11:37.976656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.789 [2024-07-25 01:11:37.976699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.789 [2024-07-25 01:11:37.976742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.789 [2024-07-25 01:11:37.976789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.789 [2024-07-25 01:11:37.976833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:15.789 [2024-07-25 01:11:37.976880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.789 [2024-07-25 01:11:37.976920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.789 [2024-07-25 01:11:37.976957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.976994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.977031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.977078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.977109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.977148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.977187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.977223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.977261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.977298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.977336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.977373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 
01:11:37.977419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.977465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.977507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.977546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.977585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.977617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.977653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.977691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.977728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.977766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.977803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.977840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.977879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.977916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.977958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.977995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.978037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.978083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.978124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.978166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.978204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.978249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.978291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.978336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.978528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:11:15.790 [2024-07-25 01:11:37.978900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.978949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.978992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.979045] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.979087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.979133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.979182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.979225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.979268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.979306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.979335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.979371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.979413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.979449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.979485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.979522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.979561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.979600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:15.790 [2024-07-25 01:11:37.979643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.979694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.979738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.979784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.979816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.979859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.979895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.979934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.979971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.980009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.980050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.980091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.980130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.980168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.980207] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.980244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.980286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.980326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.980372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.980417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.980461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.980510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.980551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.980597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.790 [2024-07-25 01:11:37.980651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.791 [2024-07-25 01:11:37.980694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.791 [2024-07-25 01:11:37.980738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.791 [2024-07-25 01:11:37.980782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.791 [2024-07-25 01:11:37.980825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:15.791 [2024-07-25 01:11:37.980868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.791 [2024-07-25 01:11:37.980914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.791 [2024-07-25 01:11:37.980956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.791 [2024-07-25 01:11:37.980999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.791 [2024-07-25 01:11:37.981050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.791 [2024-07-25 01:11:37.981092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.791 [2024-07-25 01:11:37.981135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.791 [2024-07-25 01:11:37.981178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.791 [2024-07-25 01:11:37.981224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.791 [2024-07-25 01:11:37.981270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.791 [2024-07-25 01:11:37.981315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.791 [2024-07-25 01:11:37.981369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.791 [2024-07-25 01:11:37.981411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.791 [2024-07-25 01:11:37.981454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.791 [2024-07-25 
01:11:37.981499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.791 [2024-07-25 01:11:37.981972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.791 [2024-07-25 01:11:37.982012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.791 [2024-07-25 01:11:37.982051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.791 [2024-07-25 01:11:37.982094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.791 [2024-07-25 01:11:37.982133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.791 [2024-07-25 01:11:37.982168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.791 [2024-07-25 01:11:37.982209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.791 [2024-07-25 01:11:37.982248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.791 [2024-07-25 01:11:37.982297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.791 [2024-07-25 01:11:37.982344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.791 [2024-07-25 01:11:37.982376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.791 [2024-07-25 01:11:37.982410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.791 [2024-07-25 01:11:37.982447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.791 [2024-07-25 01:11:37.982485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.791 [2024-07-25 01:11:37.982526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.791 [2024-07-25 01:11:37.982564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.791 [2024-07-25 01:11:37.982602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.791 [2024-07-25 01:11:37.982641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.791 [2024-07-25 01:11:37.982679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.791 [2024-07-25 01:11:37.982715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.791 [2024-07-25 01:11:37.982751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.791 [2024-07-25 01:11:37.982810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.791 [2024-07-25 01:11:37.982853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.791 [2024-07-25 01:11:37.982897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.791 [2024-07-25 01:11:37.982947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.791 [2024-07-25 01:11:37.982988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.791 [2024-07-25 01:11:37.983030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.791 [2024-07-25 01:11:37.983082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.791 
[2024-07-25 01:11:37.983126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.791 [2024-07-25 01:11:37.983170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.791 [2024-07-25 01:11:37.983218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.791 [2024-07-25 01:11:37.983263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.791 [2024-07-25 01:11:37.983307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.791 [2024-07-25 01:11:37.983355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.791 [2024-07-25 01:11:37.983396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.791 [2024-07-25 01:11:37.983437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.791 [2024-07-25 01:11:37.983487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.791 [2024-07-25 01:11:37.983530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.791 [2024-07-25 01:11:37.983570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.791 [2024-07-25 01:11:37.983617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.791 [2024-07-25 01:11:37.983661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.791 [2024-07-25 01:11:37.983704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.791 [2024-07-25 01:11:37.983754] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.791 [2024-07-25 01:11:37.983796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.791 [2024-07-25 01:11:37.983838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.791 [2024-07-25 01:11:37.983889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.791 [2024-07-25 01:11:37.983934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.791 [2024-07-25 01:11:37.983981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.791 [2024-07-25 01:11:37.984029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.791 [2024-07-25 01:11:37.984079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.791 [2024-07-25 01:11:37.984121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.791 [2024-07-25 01:11:37.984169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.791 [2024-07-25 01:11:37.984210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.791 [2024-07-25 01:11:37.984250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.791 [2024-07-25 01:11:37.984290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.791 [2024-07-25 01:11:37.984331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.791 [2024-07-25 01:11:37.984361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:15.791 [2024-07-25 01:11:37.984398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
> SGL length 1 00:11:15.795 [2024-07-25 01:11:37.999169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:37.999207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:37.999247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:37.999280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:37.999321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:37.999362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:37.999404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:37.999449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:37.999491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:37.999532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:37.999573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:37.999618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:37.999662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:37.999705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:37.999748] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:37.999789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:37.999833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:37.999876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:37.999926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:37.999968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:38.000014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:38.000527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:38.000574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:38.000617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:38.000661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:38.000704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:38.000748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:38.000794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:38.000837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:38.000878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:38.000917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:38.000960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:38.000999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:38.001033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:38.001075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:38.001110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:38.001148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:38.001185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:38.001222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:38.001268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:38.001312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:38.001361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:38.001405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 
01:11:38.001446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:38.001487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:38.001528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:38.001558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:38.001600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:38.001640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:38.001680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:38.001717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:38.001756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:38.001796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:38.001826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:38.001863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:38.001900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:38.001936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:38.001976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:38.002015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:38.002056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:38.002101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:38.002137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:38.002179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:38.002227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:38.002274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:38.002316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:38.002359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:38.002405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:38.002446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:38.002486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:38.002534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:38.002577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 
[2024-07-25 01:11:38.002622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:38.002669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:38.002713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:38.002756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:38.002802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:38.002843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:38.002881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:38.002931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.795 [2024-07-25 01:11:38.002972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.003018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.003064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.003108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.003152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.003331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.003668] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.003709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.003744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.003783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.003826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.003873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.003909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.003951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.003987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.004024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.004066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.004104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.004144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.004184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.004227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:15.796 [2024-07-25 01:11:38.004266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.004303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.004340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.004378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.004407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.004449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.004488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.004524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.004565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.004610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.004655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.004699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.004743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.004787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.004829] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.004870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.004914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.004956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.004997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.005040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.005090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.005131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.005174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.005217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.005261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.005305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.005354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.005396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.005437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.005477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.005520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.005558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.005594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.005632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.005670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.005706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.005744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.005785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.005823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.005858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.005900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.005933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.005970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 
01:11:38.006007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.006045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.006084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.006122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.006632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.006679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.006724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.006769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.006812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.006855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.006903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.006944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.006985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.007033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.007081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.007131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.007171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.007221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.007262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.007307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.007359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.007400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.007443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.007486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.007527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.007572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.007610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.007657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.007700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 
[2024-07-25 01:11:38.007742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.007791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.007836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.007877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.007919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.796 [2024-07-25 01:11:38.007961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.797 [2024-07-25 01:11:38.008015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.797 [2024-07-25 01:11:38.008061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.797 [2024-07-25 01:11:38.008102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.797 [2024-07-25 01:11:38.008142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.797 [2024-07-25 01:11:38.008182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.797 [2024-07-25 01:11:38.008224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.797 [2024-07-25 01:11:38.008263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.797 [2024-07-25 01:11:38.008299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.797 [2024-07-25 01:11:38.008328] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.797 [2024-07-25 01:11:38.008366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical error message repeated for timestamps 01:11:38.008406 through 01:11:38.023149 elided ...]
00:11:15.800 [2024-07-25 01:11:38.023182] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.800 [2024-07-25 01:11:38.023217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.800 [2024-07-25 01:11:38.023255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.800 [2024-07-25 01:11:38.023291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.800 [2024-07-25 01:11:38.023331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.800 [2024-07-25 01:11:38.023370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.800 [2024-07-25 01:11:38.023407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.800 [2024-07-25 01:11:38.023448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.800 [2024-07-25 01:11:38.023491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.800 [2024-07-25 01:11:38.023528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.800 [2024-07-25 01:11:38.023569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.800 [2024-07-25 01:11:38.023615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.800 [2024-07-25 01:11:38.023659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.800 [2024-07-25 01:11:38.023701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.800 [2024-07-25 01:11:38.023756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:15.800 [2024-07-25 01:11:38.023798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.800 [2024-07-25 01:11:38.023844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.800 [2024-07-25 01:11:38.023891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.800 [2024-07-25 01:11:38.023933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.800 [2024-07-25 01:11:38.023978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.800 [2024-07-25 01:11:38.024026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.800 [2024-07-25 01:11:38.024072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.800 [2024-07-25 01:11:38.024122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.800 [2024-07-25 01:11:38.024167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.800 [2024-07-25 01:11:38.024210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.800 [2024-07-25 01:11:38.024258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.800 [2024-07-25 01:11:38.024300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.800 [2024-07-25 01:11:38.024331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.800 [2024-07-25 01:11:38.024369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.800 [2024-07-25 01:11:38.024412] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.800 [2024-07-25 01:11:38.024450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.800 [2024-07-25 01:11:38.024491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.800 [2024-07-25 01:11:38.024537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.800 [2024-07-25 01:11:38.024576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.800 [2024-07-25 01:11:38.024618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.800 [2024-07-25 01:11:38.024659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.800 [2024-07-25 01:11:38.024698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.800 [2024-07-25 01:11:38.024737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.800 [2024-07-25 01:11:38.024770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.800 [2024-07-25 01:11:38.025293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.800 [2024-07-25 01:11:38.025336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.800 [2024-07-25 01:11:38.025374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.800 [2024-07-25 01:11:38.025413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.800 [2024-07-25 01:11:38.025451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:15.800 [2024-07-25 01:11:38.025497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.800 [2024-07-25 01:11:38.025541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.800 [2024-07-25 01:11:38.025586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.800 [2024-07-25 01:11:38.025632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.800 [2024-07-25 01:11:38.025678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.800 [2024-07-25 01:11:38.025722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.800 [2024-07-25 01:11:38.025765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.800 [2024-07-25 01:11:38.025811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.800 [2024-07-25 01:11:38.025855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.025898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.025939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.025986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.026029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.026076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 
01:11:38.026120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.026174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.026219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.026266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.026315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:11:15.801 [2024-07-25 01:11:38.026357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.026401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.026444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.026486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.026529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.026572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.026617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.026658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.026699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:11:15.801 [2024-07-25 01:11:38.026741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.026787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.026829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.026876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.026919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.026961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.027006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.027050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.027088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.027129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.027166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.027208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.027251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.027289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.027328] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.027363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.027398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.027429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.027464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.027497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.027534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.027572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.027617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.027660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.027698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.027737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.027778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.027817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.027848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:15.801 [2024-07-25 01:11:38.027885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.027920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.028116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.028481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.028528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.028572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.028619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.028661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.028705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.028750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.028792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.028837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.028882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.028924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.028968] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.029011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.029056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.029097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.029143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.029186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.029229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.029286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.029329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.029371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.029416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.029461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.029502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.029539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.029576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.029606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.029641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.029676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.029716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.029755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.029796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.029832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.029870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.029907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.029949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.029979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.030022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.030064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.030105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 
01:11:38.030144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.801 [2024-07-25 01:11:38.030182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.802 [2024-07-25 01:11:38.030219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.802 [2024-07-25 01:11:38.030257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.802 [2024-07-25 01:11:38.030294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.802 [2024-07-25 01:11:38.030333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.802 [2024-07-25 01:11:38.030369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.802 [2024-07-25 01:11:38.030404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.802 [2024-07-25 01:11:38.030441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.802 [2024-07-25 01:11:38.030478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.802 [2024-07-25 01:11:38.030531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.802 [2024-07-25 01:11:38.030575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.802 [2024-07-25 01:11:38.030618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.802 [2024-07-25 01:11:38.030660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.802 [2024-07-25 01:11:38.030702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.802 [2024-07-25 01:11:38.030744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.802 [2024-07-25 01:11:38.030792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.802 [2024-07-25 01:11:38.030838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.802 [2024-07-25 01:11:38.030880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.802 [2024-07-25 01:11:38.030922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.802 [2024-07-25 01:11:38.030963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.802 [2024-07-25 01:11:38.031009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.802 [2024-07-25 01:11:38.031500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.802 [2024-07-25 01:11:38.031543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.802 [2024-07-25 01:11:38.031583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.802 [2024-07-25 01:11:38.031613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.802 [2024-07-25 01:11:38.031649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.802 [2024-07-25 01:11:38.031687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.802 [2024-07-25 01:11:38.031724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.802 
[2024-07-25 01:11:38.031760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.802 [2024-07-25 01:11:38.031800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.802 [2024-07-25 01:11:38.031842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.802 [2024-07-25 01:11:38.031882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.802 [2024-07-25 01:11:38.031920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.802 [2024-07-25 01:11:38.031949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.802 [2024-07-25 01:11:38.031977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.802 [2024-07-25 01:11:38.032004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.802 [2024-07-25 01:11:38.032032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.802 [2024-07-25 01:11:38.032077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.802 [2024-07-25 01:11:38.032116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.802 [2024-07-25 01:11:38.032156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.802 [2024-07-25 01:11:38.032192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.802 [2024-07-25 01:11:38.032229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.802 [2024-07-25 01:11:38.032268] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.802 [2024-07-25 01:11:38.032306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.805 [2024-07-25 01:11:38.047237] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.805 [2024-07-25 01:11:38.047284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.805 [2024-07-25 01:11:38.047326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.805 [2024-07-25 01:11:38.047370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.805 [2024-07-25 01:11:38.047412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.805 [2024-07-25 01:11:38.047450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.805 [2024-07-25 01:11:38.047488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.805 [2024-07-25 01:11:38.047534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.805 [2024-07-25 01:11:38.047573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.805 [2024-07-25 01:11:38.047610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.805 [2024-07-25 01:11:38.047639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.805 [2024-07-25 01:11:38.047677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.805 [2024-07-25 01:11:38.047715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.805 [2024-07-25 01:11:38.047754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.805 [2024-07-25 01:11:38.047793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:15.805 [2024-07-25 01:11:38.047831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.805 [2024-07-25 01:11:38.047868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.805 [2024-07-25 01:11:38.047905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.805 [2024-07-25 01:11:38.047944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.805 [2024-07-25 01:11:38.047985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.805 [2024-07-25 01:11:38.048021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.805 [2024-07-25 01:11:38.048061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.805 [2024-07-25 01:11:38.048106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.805 [2024-07-25 01:11:38.048138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.805 [2024-07-25 01:11:38.048173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.048210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.048244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.048288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.048326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.048363] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.048403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.048442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.048481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.048521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.048560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.048600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.048644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.048693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.048737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.048784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.048828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.048872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.048920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.048972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.049015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.049064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.049107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.049152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.049196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.049238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.049280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.049323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.049375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.049418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.049461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.049515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.049556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.049597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 
01:11:38.049641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.050136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.050180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.050218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.050261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.050304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.050341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.050376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.050405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.050441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.050479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.050515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.050549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.050586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.050627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.050665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.050704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.050743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.050779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.050818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.050856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.050885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.050926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.050966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.051004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.051047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.051090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.051143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.051189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 
[2024-07-25 01:11:38.051231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.051281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.051325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.051370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.051412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.051457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.051502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.051545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.051588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.051636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.051681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.051728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.051772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.051818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.051863] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.051903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.051948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.051987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.052028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.052074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.052117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.052158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.052195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.052235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.052273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.052308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.052346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.052389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.052425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:15.806 [2024-07-25 01:11:38.052462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.052500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.806 [2024-07-25 01:11:38.052537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.052573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.052614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.052651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.052687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.053181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.053228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.053269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.053312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.053353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.053405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.053450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.053497] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.053545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.053584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.053630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.053678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.053721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.053769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.053815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.053857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.053900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.053945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.053987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.054029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.054083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.054124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.054168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.054224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.054267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.054314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.054353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.054391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.054430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.054468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.054498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.054537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.054575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.054614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.054649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.054688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 
01:11:38.054723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.054762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.054805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.054845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.054883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.054930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.054970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.055001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.055039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.055080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.055116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.055153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.055199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.055233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.055269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.055312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.055350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.055390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.055428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.055462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.055501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.055543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.055583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.055623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.055663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.055702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.055747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.056245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.056292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 
[2024-07-25 01:11:38.056337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.056377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.056424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.056467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.056513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.056557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.056599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.056640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.056677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.056718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.056760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.056791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.056825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.056864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807 [2024-07-25 01:11:38.056901] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.807
[2024-07-25 01:11:38.056942 - 01:11:38.070895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (identical message repeated for each subsequent read) 00:11:15.810
[2024-07-25 01:11:38.070937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.070980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.071027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.071076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.071119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.071169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.071211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.071260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.071303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.071347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.071390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.071435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.071481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.071967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.072011] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.072060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.072110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.072154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.072197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.072245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.072289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.072331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.072380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.072425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.072475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.072523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.072568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.072613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.072657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:15.811 [2024-07-25 01:11:38.072704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.072756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.072803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.072849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.072893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.072933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.072975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.073013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.073058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.073100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.073137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.073173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.073210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.073248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.073288] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.073328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.073365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.073404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.073443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.073482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.073520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.073577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.073618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.073649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.073687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.073728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.073767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.073804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.073844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.073888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.073932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.073971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.074009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.074057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.074098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.074131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.074168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.074205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.074240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.074279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.074325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.074370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.074409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 
01:11:38.074451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.074488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.074527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.074565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.075083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.075135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.075180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.075228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.075270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.075313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.075346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.075387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.075431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.075467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.075504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.075544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.075589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.075631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.075670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.075708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:11:15.811 [2024-07-25 01:11:38.075745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.075783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.075814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.075852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.075897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.811 [2024-07-25 01:11:38.075938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.075979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.076024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.076064] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.076103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.076140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.076182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.076220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.076261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.076300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.076340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.076378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.076417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.076463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.076509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.076556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.076599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.076642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:15.812 [2024-07-25 01:11:38.076686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.076733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.076778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.076819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.076859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.076901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.076939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.076985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.077025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.077064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.077106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.077145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.077184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.077226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.077268] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.077308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.077340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.077383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.077429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.077474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.077526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.077568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.077613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.077655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.077704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.078208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.078256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.078309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.078354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.078398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.078445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.078490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.078531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.078574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.078620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.078667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.078715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.078766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.078809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.078852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.078902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.078945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.078989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 
01:11:38.079030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.079079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.079125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.079169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.079217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.079257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.079301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.079345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.079387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.079435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.079479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.079523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.079568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.079613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.079656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.079696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.079736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.079776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.079813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.079846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.079887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.079924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.079963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.079999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.080040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.080087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.080127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.080165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.080210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 
[2024-07-25 01:11:38.080254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.080295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.080325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.080365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.080402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.080439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.080482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.080526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.812 [2024-07-25 01:11:38.080565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.813 [2024-07-25 01:11:38.080603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.813 [2024-07-25 01:11:38.080647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.813 [2024-07-25 01:11:38.080691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.813 [2024-07-25 01:11:38.080734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.813 [2024-07-25 01:11:38.080773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.813 [2024-07-25 01:11:38.080806] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.813 [2024-07-25 01:11:38.080842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical ctrlr_bdev.c:309 error repeated for timestamps 01:11:38.081373 through 01:11:38.095480 ...]
[2024-07-25 01:11:38.095524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.095564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.095603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.095641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.095677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.095719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.095758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.095804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.095851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.095895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.095938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.095987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.096033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.096081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.096126] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.096173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.096213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.096270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.096314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.096358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.096401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.096454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.096497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.096539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.096586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.096628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.096671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.096719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.096765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:15.816 [2024-07-25 01:11:38.096807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.096855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.097338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.097390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.097430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.097468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.097508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.097545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.097585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.097618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.097657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.097695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.097733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.097772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.097806] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.097844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.097879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.097928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.097964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.098002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.098050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.098092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.098127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.098167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.098213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.098257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.098303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.098343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.098400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.098444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.098491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.098543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.098587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.098627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.098669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.098711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.098740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.098777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.098815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.098853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.098892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.098931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.098967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 
01:11:38.099008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.099049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.099087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.099127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.099168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.099208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.099256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.099306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.816 [2024-07-25 01:11:38.099350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.099394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.099440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.099491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.099533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.099575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.099619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.099663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.099708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.099752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.099796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.099842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.099885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.099925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.100407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.100448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.100488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.100528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.100562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.100601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.100646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 
[2024-07-25 01:11:38.100690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.100732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.100779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.100824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.100869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.100920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.100962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.101006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.101050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.101092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.101136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.101182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.101225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.101272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.101314] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.101367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.101412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.101458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.101502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.101549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.101595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.101641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.101689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.101731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.101778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.101828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.101871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.101916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.101957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:15.817 [2024-07-25 01:11:38.102003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.102046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.102087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.102135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.102178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.102217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.102248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.102286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.102324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.102361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.102398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.102433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.102479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.102521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.102572] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.102610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.102641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.102679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.102722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.102759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.102801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.102839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.102881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.102923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.102962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.103004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.103041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.103086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.103590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.103639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.103681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.103726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.103780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.103827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.103869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.103916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.817 [2024-07-25 01:11:38.103959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.818 [2024-07-25 01:11:38.104000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.818 [2024-07-25 01:11:38.104049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.818 [2024-07-25 01:11:38.104099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.818 [2024-07-25 01:11:38.104143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.818 [2024-07-25 01:11:38.104187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.818 [2024-07-25 01:11:38.104233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.818 [2024-07-25 
01:11:38.104275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.818 [2024-07-25 01:11:38.104322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.818 [2024-07-25 01:11:38.104367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.818 [2024-07-25 01:11:38.104406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.818 [2024-07-25 01:11:38.104452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.818 [2024-07-25 01:11:38.104499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.818 [2024-07-25 01:11:38.104536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.818 [2024-07-25 01:11:38.104572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.818 [2024-07-25 01:11:38.104605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.818 [2024-07-25 01:11:38.104641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.818 [2024-07-25 01:11:38.104682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.818 [2024-07-25 01:11:38.104731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.818 [2024-07-25 01:11:38.104768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.818 [2024-07-25 01:11:38.104806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.818 [2024-07-25 01:11:38.104844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.818 [2024-07-25 01:11:38.119900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd:
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.821 [2024-07-25 01:11:38.119939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.821 [2024-07-25 01:11:38.119974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.821 [2024-07-25 01:11:38.120015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.821 [2024-07-25 01:11:38.120051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.821 [2024-07-25 01:11:38.120087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.821 [2024-07-25 01:11:38.120125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.821 [2024-07-25 01:11:38.120168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.821 [2024-07-25 01:11:38.120208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.821 [2024-07-25 01:11:38.120251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.821 [2024-07-25 01:11:38.120293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.821 [2024-07-25 01:11:38.120343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.821 [2024-07-25 01:11:38.120386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.821 [2024-07-25 01:11:38.120432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.821 [2024-07-25 01:11:38.120477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.821 
[2024-07-25 01:11:38.120521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.821 [2024-07-25 01:11:38.120567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.821 [2024-07-25 01:11:38.120611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.821 [2024-07-25 01:11:38.120662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.821 [2024-07-25 01:11:38.120706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.821 [2024-07-25 01:11:38.120748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.821 [2024-07-25 01:11:38.120794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.821 [2024-07-25 01:11:38.120842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.821 [2024-07-25 01:11:38.120886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.821 [2024-07-25 01:11:38.120929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.821 [2024-07-25 01:11:38.120969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.821 [2024-07-25 01:11:38.121004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.821 [2024-07-25 01:11:38.121053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.821 [2024-07-25 01:11:38.121103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.821 [2024-07-25 01:11:38.121141] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.821 [2024-07-25 01:11:38.121177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.821 [2024-07-25 01:11:38.121220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.821 [2024-07-25 01:11:38.121262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.821 [2024-07-25 01:11:38.121298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.821 [2024-07-25 01:11:38.121337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.821 [2024-07-25 01:11:38.121376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.821 [2024-07-25 01:11:38.121415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.821 [2024-07-25 01:11:38.121455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.821 [2024-07-25 01:11:38.121497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.821 [2024-07-25 01:11:38.121533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.821 [2024-07-25 01:11:38.121572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.821 [2024-07-25 01:11:38.121612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.821 [2024-07-25 01:11:38.121650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.821 [2024-07-25 01:11:38.121690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:15.821 [2024-07-25 01:11:38.121727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.821 [2024-07-25 01:11:38.121760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.821 [2024-07-25 01:11:38.121803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.821 [2024-07-25 01:11:38.121847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.821 [2024-07-25 01:11:38.121890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.821 [2024-07-25 01:11:38.122401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.821 [2024-07-25 01:11:38.122448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.821 [2024-07-25 01:11:38.122493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.821 [2024-07-25 01:11:38.122540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.821 [2024-07-25 01:11:38.122583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.821 [2024-07-25 01:11:38.122624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.821 [2024-07-25 01:11:38.122670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.821 [2024-07-25 01:11:38.122713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.821 [2024-07-25 01:11:38.122758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.821 [2024-07-25 01:11:38.122807] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.821 [2024-07-25 01:11:38.122849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.821 [2024-07-25 01:11:38.122894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.821 [2024-07-25 01:11:38.122936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.822 [2024-07-25 01:11:38.122986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.822 [2024-07-25 01:11:38.123028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.822 [2024-07-25 01:11:38.123077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.822 [2024-07-25 01:11:38.123123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.822 [2024-07-25 01:11:38.123169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.822 [2024-07-25 01:11:38.123213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.822 [2024-07-25 01:11:38.123260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.822 [2024-07-25 01:11:38.123305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.822 [2024-07-25 01:11:38.123344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.822 [2024-07-25 01:11:38.123383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.822 [2024-07-25 01:11:38.123429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:15.822 [2024-07-25 01:11:38.123465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.822 [2024-07-25 01:11:38.123503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.822 [2024-07-25 01:11:38.123542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.822 [2024-07-25 01:11:38.123573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.822 [2024-07-25 01:11:38.123611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.822 [2024-07-25 01:11:38.123648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.822 [2024-07-25 01:11:38.123684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.822 [2024-07-25 01:11:38.123722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.822 [2024-07-25 01:11:38.123760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.822 [2024-07-25 01:11:38.123799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.822 [2024-07-25 01:11:38.123837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.822 [2024-07-25 01:11:38.123881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.822 [2024-07-25 01:11:38.123922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.822 [2024-07-25 01:11:38.123964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.822 [2024-07-25 
01:11:38.124002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.822 [2024-07-25 01:11:38.124038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.822 [2024-07-25 01:11:38.124073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.822 [2024-07-25 01:11:38.124109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.822 [2024-07-25 01:11:38.124149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.822 [2024-07-25 01:11:38.124191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.822 [2024-07-25 01:11:38.124232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.822 [2024-07-25 01:11:38.124269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.822 [2024-07-25 01:11:38.124308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.822 [2024-07-25 01:11:38.124345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.822 [2024-07-25 01:11:38.124387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.822 [2024-07-25 01:11:38.124425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.822 [2024-07-25 01:11:38.124463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.822 [2024-07-25 01:11:38.124506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.822 [2024-07-25 01:11:38.124546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.822 [2024-07-25 01:11:38.124586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.822 [2024-07-25 01:11:38.124625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.822 [2024-07-25 01:11:38.124669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.822 [2024-07-25 01:11:38.124706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.822 [2024-07-25 01:11:38.124751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.822 [2024-07-25 01:11:38.124790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.822 [2024-07-25 01:11:38.124833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.822 [2024-07-25 01:11:38.124876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.822 [2024-07-25 01:11:38.124920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.822 [2024-07-25 01:11:38.124974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.822 [2024-07-25 01:11:38.125481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.822 [2024-07-25 01:11:38.125529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.822 [2024-07-25 01:11:38.125577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.822 [2024-07-25 01:11:38.125622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.822 
[2024-07-25 01:11:38.125666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.822 [2024-07-25 01:11:38.125708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.822 [2024-07-25 01:11:38.125748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.822 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:11:15.822 [2024-07-25 01:11:38.125788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.822 [2024-07-25 01:11:38.125833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.822 [2024-07-25 01:11:38.125874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.822 [2024-07-25 01:11:38.125904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.822 [2024-07-25 01:11:38.125942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.822 [2024-07-25 01:11:38.125978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.822 [2024-07-25 01:11:38.126016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.822 [2024-07-25 01:11:38.126059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.822 [2024-07-25 01:11:38.126108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.822 [2024-07-25 01:11:38.126154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.822 [2024-07-25 01:11:38.126195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:11:15.822 [2024-07-25 01:11:38.126234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.822 [2024-07-25 01:11:38.126280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.822 [2024-07-25 01:11:38.126321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.822 [2024-07-25 01:11:38.126351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.822 [2024-07-25 01:11:38.126391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.822 [2024-07-25 01:11:38.126428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.822 [2024-07-25 01:11:38.126466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.822 [2024-07-25 01:11:38.126501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.823 [2024-07-25 01:11:38.126539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.823 true 00:11:15.823 [2024-07-25 01:11:38.126577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.823 [2024-07-25 01:11:38.126619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.823 [2024-07-25 01:11:38.126657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.823 [2024-07-25 01:11:38.126696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.823 [2024-07-25 01:11:38.126734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.823 [2024-07-25 
01:11:38.126771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.823 [2024-07-25 01:11:38.126810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.823 [2024-07-25 01:11:38.126839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.823 [2024-07-25 01:11:38.126876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.823 [2024-07-25 01:11:38.126913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.823 [2024-07-25 01:11:38.126951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.823 [2024-07-25 01:11:38.126993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.823 [2024-07-25 01:11:38.127030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.823 [2024-07-25 01:11:38.127067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.823 [2024-07-25 01:11:38.127113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.823 [2024-07-25 01:11:38.127154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.823 [2024-07-25 01:11:38.127203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.823 [2024-07-25 01:11:38.127252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.823 [2024-07-25 01:11:38.127293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.823 [2024-07-25 01:11:38.127338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.823 [2024-07-25 01:11:38.127386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.823 [2024-07-25 01:11:38.127429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.823 [2024-07-25 01:11:38.127477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.823 [2024-07-25 01:11:38.127520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.823 [2024-07-25 01:11:38.127567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.823 [2024-07-25 01:11:38.127612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.823 [2024-07-25 01:11:38.127660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.823 [2024-07-25 01:11:38.127703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.823 [2024-07-25 01:11:38.127747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.823 [2024-07-25 01:11:38.127793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.823 [2024-07-25 01:11:38.127836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.823 [2024-07-25 01:11:38.127882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.823 [2024-07-25 01:11:38.127943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.823 [2024-07-25 01:11:38.127989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.823 
[2024-07-25 01:11:38.128032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.823 [2024-07-25 01:11:38.128085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.823 [2024-07-25 01:11:38.128129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.823 [2024-07-25 01:11:38.128610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.823 [2024-07-25 01:11:38.128651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.823 [2024-07-25 01:11:38.128696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.823 [2024-07-25 01:11:38.128739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.823 [2024-07-25 01:11:38.128779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.823 [2024-07-25 01:11:38.128810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.823 [2024-07-25 01:11:38.128848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.823 [2024-07-25 01:11:38.128888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.823 [2024-07-25 01:11:38.128926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.823 [2024-07-25 01:11:38.128966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.823 [2024-07-25 01:11:38.129000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.823 [2024-07-25 01:11:38.129038] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.823 [2024-07-25 01:11:38.129082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical *ERROR* line repeated verbatim (timestamps 00:11:15.823-00:11:15.826, 2024-07-25 01:11:38.129121 through 01:11:38.143608) ...]
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.826 [2024-07-25 01:11:38.143646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.826 [2024-07-25 01:11:38.143684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.826 [2024-07-25 01:11:38.144198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.826 [2024-07-25 01:11:38.144244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.826 [2024-07-25 01:11:38.144295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.826 [2024-07-25 01:11:38.144337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.826 [2024-07-25 01:11:38.144380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.826 [2024-07-25 01:11:38.144432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.826 [2024-07-25 01:11:38.144474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.826 [2024-07-25 01:11:38.144519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.826 [2024-07-25 01:11:38.144571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.826 [2024-07-25 01:11:38.144614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.826 [2024-07-25 01:11:38.144660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.826 [2024-07-25 01:11:38.144709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:15.826 [2024-07-25 01:11:38.144753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.826 [2024-07-25 01:11:38.144796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.826 [2024-07-25 01:11:38.144845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.826 [2024-07-25 01:11:38.144888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.826 [2024-07-25 01:11:38.144931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.826 [2024-07-25 01:11:38.144973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.826 [2024-07-25 01:11:38.145014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.826 [2024-07-25 01:11:38.145062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.826 [2024-07-25 01:11:38.145104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.826 [2024-07-25 01:11:38.145148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.826 [2024-07-25 01:11:38.145189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.826 [2024-07-25 01:11:38.145236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.826 [2024-07-25 01:11:38.145277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.826 [2024-07-25 01:11:38.145323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.826 [2024-07-25 01:11:38.145379] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.826 [2024-07-25 01:11:38.145426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.826 [2024-07-25 01:11:38.145467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.826 [2024-07-25 01:11:38.145506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.826 [2024-07-25 01:11:38.145545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.826 [2024-07-25 01:11:38.145589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.826 [2024-07-25 01:11:38.145629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.826 [2024-07-25 01:11:38.145665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.145695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.145736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.145773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.145809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.145848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.145887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.145924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.145960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.145998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.146047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.146089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.146128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.146168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.146198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.146237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.146277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.146318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.146363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.146400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.146438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.146474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 
01:11:38.146508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.146545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.146585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.146626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.146663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.146705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.146742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.146781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.146818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.147313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.147365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.147406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.147447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.147492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.147536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.147579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.147622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.147663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.147704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.147749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.147791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.147829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.147868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.147898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.147938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.147981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.148018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.148070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.148107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 
[2024-07-25 01:11:38.148144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.148180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.148216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.148260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.148305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.148342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.148379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.148417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.148456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.148493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.148533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.148574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.148614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.148661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.148700] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.148739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.148780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.148823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.148863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.148903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.148950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.148996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.149039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.149088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.149137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.149176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.149220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.149268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.149309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:15.827 [2024-07-25 01:11:38.149350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.149398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.149441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.149488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.149538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.149583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.149624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.149668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.149716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.149760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.827 [2024-07-25 01:11:38.149809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.828 [2024-07-25 01:11:38.149852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.828 [2024-07-25 01:11:38.149893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.828 [2024-07-25 01:11:38.149938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.828 [2024-07-25 01:11:38.150429] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.828 [2024-07-25 01:11:38.150473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.828 [2024-07-25 01:11:38.150514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.828 [2024-07-25 01:11:38.150556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.828 [2024-07-25 01:11:38.150597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.828 [2024-07-25 01:11:38.150634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.828 [2024-07-25 01:11:38.150665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.828 [2024-07-25 01:11:38.150700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.828 [2024-07-25 01:11:38.150736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.828 [2024-07-25 01:11:38.150773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.828 [2024-07-25 01:11:38.150809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.828 [2024-07-25 01:11:38.150844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.828 [2024-07-25 01:11:38.150881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.828 [2024-07-25 01:11:38.150918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.828 [2024-07-25 01:11:38.150961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:15.828 [2024-07-25 01:11:38.150997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.828 [2024-07-25 01:11:38.151036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.828 [2024-07-25 01:11:38.151075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.828 [2024-07-25 01:11:38.151115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.828 [2024-07-25 01:11:38.151153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.828 [2024-07-25 01:11:38.151190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.828 [2024-07-25 01:11:38.151231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.828 [2024-07-25 01:11:38.151277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.828 [2024-07-25 01:11:38.151324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.828 [2024-07-25 01:11:38.151364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.828 [2024-07-25 01:11:38.151404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.828 [2024-07-25 01:11:38.151451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.828 [2024-07-25 01:11:38.151494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.828 [2024-07-25 01:11:38.151540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.828 [2024-07-25 
00:11:15.828 01:11:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 789518
00:11:15.828 01:11:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.155637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.155675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.155711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.155755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.155785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.155823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.155865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.155904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.155943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.155981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.156019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.156070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.156110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.156626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:15.829 [2024-07-25 01:11:38.156667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.156706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.156749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.156798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.156840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.156882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.156933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.156978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.157020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.157070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.157117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.157160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.157206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.157250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.157294] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.157341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.157387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.157429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.157476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.157521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.157562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.157609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.157655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.157699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.157748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.157792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.157835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.157885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.157926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.157969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.158024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.158072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.158118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.158155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.158194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.158233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.158266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.158306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.158347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.158389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.158430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.158469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.158511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 
01:11:38.158550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.158589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.158627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.158664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.158714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.158754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.158788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.158829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.158868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.158904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.158944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.158978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.159016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.159056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.159096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.159137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.159175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.159217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.159257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.159297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.159791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.159841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.159888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.159931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.159975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.160019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.160069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.160115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.829 [2024-07-25 01:11:38.160159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.830 
[2024-07-25 01:11:38.160210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.830 [2024-07-25 01:11:38.160250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.830 [2024-07-25 01:11:38.160296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.830 [2024-07-25 01:11:38.160338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.830 [2024-07-25 01:11:38.160383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.830 [2024-07-25 01:11:38.160427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.830 [2024-07-25 01:11:38.160471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.830 [2024-07-25 01:11:38.160512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.830 [2024-07-25 01:11:38.160557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.830 [2024-07-25 01:11:38.160598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.830 [2024-07-25 01:11:38.160638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.830 [2024-07-25 01:11:38.160678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.830 [2024-07-25 01:11:38.160721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.830 [2024-07-25 01:11:38.160751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.830 [2024-07-25 01:11:38.160790] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.830 [2024-07-25 01:11:38.160830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.830 [2024-07-25 01:11:38.160869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.830 [2024-07-25 01:11:38.160907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.830 [2024-07-25 01:11:38.160944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.830 [2024-07-25 01:11:38.160983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.830 [2024-07-25 01:11:38.161019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.830 [2024-07-25 01:11:38.161061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.830 [2024-07-25 01:11:38.161109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.830 [2024-07-25 01:11:38.161145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.830 [2024-07-25 01:11:38.161185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.830 [2024-07-25 01:11:38.161223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.830 [2024-07-25 01:11:38.161254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.830 [2024-07-25 01:11:38.161291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.830 [2024-07-25 01:11:38.161328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:15.830 [2024-07-25 01:11:38.161368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.830 [2024-07-25 01:11:38.161405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.831 [2024-07-25 01:11:38.161442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.831 [2024-07-25 01:11:38.161482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.831 [2024-07-25 01:11:38.161522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.831 [2024-07-25 01:11:38.161565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.831 [2024-07-25 01:11:38.161605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.831 [2024-07-25 01:11:38.161646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.831 [2024-07-25 01:11:38.161683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.831 [2024-07-25 01:11:38.161723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.831 [2024-07-25 01:11:38.161765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.831 [2024-07-25 01:11:38.161815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.831 [2024-07-25 01:11:38.161859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.831 [2024-07-25 01:11:38.161904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.831 [2024-07-25 01:11:38.161951] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.831 [2024-07-25 01:11:38.161993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.831 [2024-07-25 01:11:38.162034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.831 [2024-07-25 01:11:38.162082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.831 [2024-07-25 01:11:38.162130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.831 [2024-07-25 01:11:38.162174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.831 [2024-07-25 01:11:38.162223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.831 [2024-07-25 01:11:38.162268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.831 [2024-07-25 01:11:38.162313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.831 [2024-07-25 01:11:38.162363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.831 [2024-07-25 01:11:38.162403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.831 [2024-07-25 01:11:38.162873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.831 [2024-07-25 01:11:38.162915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.831 [2024-07-25 01:11:38.162958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.831 [2024-07-25 01:11:38.162996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:15.831 [2024-07-25 01:11:38.163038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.831 [2024-07-25 01:11:38.163082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.831 [2024-07-25 01:11:38.163124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.831 [2024-07-25 01:11:38.163162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.831 [2024-07-25 01:11:38.163209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.831 [2024-07-25 01:11:38.163248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.831 [2024-07-25 01:11:38.163287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.831 [2024-07-25 01:11:38.163337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.831 [2024-07-25 01:11:38.163368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.831 [2024-07-25 01:11:38.163406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.831 [2024-07-25 01:11:38.163442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.831 [2024-07-25 01:11:38.163477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.831 [2024-07-25 01:11:38.163514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.831 [2024-07-25 01:11:38.163554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.831 [2024-07-25 
01:11:38.163595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.831 [2024-07-25 01:11:38.163635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.831 [2024-07-25 01:11:38.163675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.831 [2024-07-25 01:11:38.163713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.831 [2024-07-25 01:11:38.163752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.831 [2024-07-25 01:11:38.163791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.831 [2024-07-25 01:11:38.163830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.831 [2024-07-25 01:11:38.163866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.831 [2024-07-25 01:11:38.163906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.831 [2024-07-25 01:11:38.163956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.832 [2024-07-25 01:11:38.163999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.832 [2024-07-25 01:11:38.164048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.832 [2024-07-25 01:11:38.164098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.832 [2024-07-25 01:11:38.164148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.832 [2024-07-25 01:11:38.164192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.832 [2024-07-25 01:11:38.164236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.832 [2024-07-25 01:11:38.164289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.832 [2024-07-25 01:11:38.164331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.832 [2024-07-25 01:11:38.164373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.832 [2024-07-25 01:11:38.164416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.832 [2024-07-25 01:11:38.164454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.832 [2024-07-25 01:11:38.164492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.832 [2024-07-25 01:11:38.164528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.832 [2024-07-25 01:11:38.164568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.832 [2024-07-25 01:11:38.164607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.832 [2024-07-25 01:11:38.164642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.832 [2024-07-25 01:11:38.164681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.832 [2024-07-25 01:11:38.164715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.832 [2024-07-25 01:11:38.164754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.832 
[2024-07-25 01:11:38.164791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.832 [2024-07-25 01:11:38.164828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.832 [2024-07-25 01:11:38.164870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.832 [2024-07-25 01:11:38.164918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.832 [2024-07-25 01:11:38.164961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.832 [2024-07-25 01:11:38.165006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.832 [2024-07-25 01:11:38.165056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.832 [2024-07-25 01:11:38.165099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.832 [2024-07-25 01:11:38.165144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.832 [2024-07-25 01:11:38.165190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.832 [2024-07-25 01:11:38.165232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.832 [2024-07-25 01:11:38.165275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.832 [2024-07-25 01:11:38.165324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.832 [2024-07-25 01:11:38.165368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.832 [2024-07-25 01:11:38.165412] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.832 [2024-07-25 01:11:38.165454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.834 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:11:15.835 [2024-07-25 01:11:38.179836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd:
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.835 [2024-07-25 01:11:38.179876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.835 [2024-07-25 01:11:38.179913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.835 [2024-07-25 01:11:38.179954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.835 [2024-07-25 01:11:38.179994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.835 [2024-07-25 01:11:38.180024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.835 [2024-07-25 01:11:38.180065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.835 [2024-07-25 01:11:38.180103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.835 [2024-07-25 01:11:38.180145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.835 [2024-07-25 01:11:38.180182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.835 [2024-07-25 01:11:38.180222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.835 [2024-07-25 01:11:38.180268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.835 [2024-07-25 01:11:38.180308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.835 [2024-07-25 01:11:38.180348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.835 [2024-07-25 01:11:38.180385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.835 
[2024-07-25 01:11:38.180431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.835 [2024-07-25 01:11:38.180477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.835 [2024-07-25 01:11:38.180508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.835 [2024-07-25 01:11:38.180543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.835 [2024-07-25 01:11:38.180581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.835 [2024-07-25 01:11:38.180620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.835 [2024-07-25 01:11:38.180659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.835 [2024-07-25 01:11:38.180697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.835 [2024-07-25 01:11:38.180736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.835 [2024-07-25 01:11:38.180781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.835 [2024-07-25 01:11:38.180823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.835 [2024-07-25 01:11:38.180861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.835 [2024-07-25 01:11:38.180899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.835 [2024-07-25 01:11:38.180934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.835 [2024-07-25 01:11:38.180974] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.835 [2024-07-25 01:11:38.181012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.835 [2024-07-25 01:11:38.181056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.181555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.181600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.181631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.181675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.181720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.181758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.181795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.181835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.181876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.181916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.181954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.181996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:15.836 [2024-07-25 01:11:38.182031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.182073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.182111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.182145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.182184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.182212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.182246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.182284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.182315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.182343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.182372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.182401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.182430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.182463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.182504] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.182544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.182583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.182625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.182664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.182702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.182736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.182773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.182818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.182860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.182906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.182950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.182994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.183039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.183087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.183135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.183177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.183221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.183263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.183305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.183345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.183394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.183434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.183477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.183518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.183561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.183610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.183653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.183696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 
01:11:38.183740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.183778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.183819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.183855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.183900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.183945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.183984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.184024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.184061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.184616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.184662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.184708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.184755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.184798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.184840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.184882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.184932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.184980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.185024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.185070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.185116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.185161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.185207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.185250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.185293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.185338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.185381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.185424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.185469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 
[2024-07-25 01:11:38.185511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.185556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.185605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.185650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.836 [2024-07-25 01:11:38.185693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.837 [2024-07-25 01:11:38.185736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.837 [2024-07-25 01:11:38.185781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.837 [2024-07-25 01:11:38.185824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.837 [2024-07-25 01:11:38.185869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.837 [2024-07-25 01:11:38.185913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.837 [2024-07-25 01:11:38.185957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.837 [2024-07-25 01:11:38.185998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.837 [2024-07-25 01:11:38.186046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.837 [2024-07-25 01:11:38.186090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.837 [2024-07-25 01:11:38.186133] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.837 [2024-07-25 01:11:38.186184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.837 [2024-07-25 01:11:38.186226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.837 [2024-07-25 01:11:38.186269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.837 [2024-07-25 01:11:38.186316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.837 [2024-07-25 01:11:38.186363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.837 [2024-07-25 01:11:38.186405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.837 [2024-07-25 01:11:38.186450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.837 [2024-07-25 01:11:38.186494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.837 [2024-07-25 01:11:38.186535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.837 [2024-07-25 01:11:38.186581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.837 [2024-07-25 01:11:38.186629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.837 [2024-07-25 01:11:38.186673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.837 [2024-07-25 01:11:38.186722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.837 [2024-07-25 01:11:38.186768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:15.837 [2024-07-25 01:11:38.186814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.837 [2024-07-25 01:11:38.186855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.837 [2024-07-25 01:11:38.186894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.837 [2024-07-25 01:11:38.186930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.837 [2024-07-25 01:11:38.186970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.837 [2024-07-25 01:11:38.187016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.837 [2024-07-25 01:11:38.187059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.837 [2024-07-25 01:11:38.187099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.837 [2024-07-25 01:11:38.187143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.837 [2024-07-25 01:11:38.187184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.837 [2024-07-25 01:11:38.187214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.837 [2024-07-25 01:11:38.187255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.837 [2024-07-25 01:11:38.187293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.837 [2024-07-25 01:11:38.187331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.837 [2024-07-25 01:11:38.187810] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.837 [2024-07-25 01:11:38.187854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.837 [2024-07-25 01:11:38.187894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.837 [2024-07-25 01:11:38.187932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.837 [2024-07-25 01:11:38.187968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.837 [2024-07-25 01:11:38.188009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.837 [2024-07-25 01:11:38.188047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.837 [2024-07-25 01:11:38.188084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.837 [2024-07-25 01:11:38.188123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.837 [2024-07-25 01:11:38.188164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.837 [2024-07-25 01:11:38.188199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.837 [2024-07-25 01:11:38.188239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.837 [2024-07-25 01:11:38.188275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.837 [2024-07-25 01:11:38.188314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.837 [2024-07-25 01:11:38.188352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:15.837 [2024-07-25 01:11:38.188392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.837 [2024-07-25 01:11:38.188434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.837 [2024-07-25 01:11:38.188476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.837 [2024-07-25 01:11:38.188520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.837 [2024-07-25 01:11:38.188562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.837 [2024-07-25 01:11:38.188610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.837 [2024-07-25 01:11:38.188648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.837 [2024-07-25 01:11:38.188688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.837 [2024-07-25 01:11:38.188728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.837 [2024-07-25 01:11:38.188773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.837 [2024-07-25 01:11:38.188819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.837 [2024-07-25 01:11:38.188855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.837 [2024-07-25 01:11:38.188890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.837 [2024-07-25 01:11:38.188929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.837 [2024-07-25 
01:11:38.188971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
01:11:38.203719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.840 [2024-07-25 01:11:38.203755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.840 [2024-07-25 01:11:38.203803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.840 [2024-07-25 01:11:38.203844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.840 [2024-07-25 01:11:38.203885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.203922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.203953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.203991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.204028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.204070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.204109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.204148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.204190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.204226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.204268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.204307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.204347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.204388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.204426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.204458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.204499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.204536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.204570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.204613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.204656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.204701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.204749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.204792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.204833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 
[2024-07-25 01:11:38.204878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.204923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.204965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.205008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.205061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.205105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.205154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.205202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.205243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.205289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.205336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.205379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.205422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.205470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.205517] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.205559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.205603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.205650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.205697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.205743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.206227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.206272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.206303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.206344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.206383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.206424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.206461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.206500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.206536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:15.841 [2024-07-25 01:11:38.206571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.206609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.206645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.206683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.206723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.206765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.206803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.206843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.206886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.206925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.206964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.207004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.207055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.207100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.207142] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.207197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.207239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.207280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.207330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.207373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.207416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.207465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.207508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.207549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.207595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.207637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.207678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.207724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.207768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.207816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.207861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.207904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.207946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.207992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.208040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.208089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.208133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.208179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.208224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.208268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.208310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.208352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.208396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 
01:11:38.208443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.841 [2024-07-25 01:11:38.208486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.208531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.208573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.208616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.208653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.208691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.208731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.208779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.208820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.208858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.208890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.209381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.209426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.209463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.209499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.209534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.209576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.209614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.209656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.209696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.209736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.209778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.209819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.209857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.209891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.209929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.209969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.210013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 
[2024-07-25 01:11:38.210060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.210112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.210157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.210202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.210245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.210288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.210331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.210375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.210415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.210447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.210486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.210525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.210560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.210595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.210634] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.210678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.210719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.210756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.210786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.210829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.210871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.210913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.210954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.210995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.211045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.211085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.211123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.211161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.211200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:15.842 [2024-07-25 01:11:38.211239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.211277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.211318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.211358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.211398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.211445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.211487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.211532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.211575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.211617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.211662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.211702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.211747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.211794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.211834] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.211884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.211931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.212432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.212477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.212527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.212569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.212611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.212657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.212698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.842 [2024-07-25 01:11:38.212749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.843 [2024-07-25 01:11:38.212791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.843 [2024-07-25 01:11:38.212843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.843 [2024-07-25 01:11:38.212885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.843 [2024-07-25 01:11:38.212929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1
00:11:15.843 [2024-07-25 01:11:38.212979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:11:15.843 [... identical "Read NLB 1 * block size 512 > SGL length 1" error repeated from 01:11:38.213022 through 01:11:38.227461 ...]
00:11:15.845 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
01:11:38.227503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.846 [2024-07-25 01:11:38.227543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.846 [2024-07-25 01:11:38.227584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.846 [2024-07-25 01:11:38.227623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.846 [2024-07-25 01:11:38.228171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.846 [2024-07-25 01:11:38.228213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.846 [2024-07-25 01:11:38.228248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.846 [2024-07-25 01:11:38.228294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.846 [2024-07-25 01:11:38.228341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.846 [2024-07-25 01:11:38.228386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.846 [2024-07-25 01:11:38.228426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.846 [2024-07-25 01:11:38.228475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.846 [2024-07-25 01:11:38.228523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.846 [2024-07-25 01:11:38.228566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.846 [2024-07-25 01:11:38.228615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.846 [2024-07-25 01:11:38.228658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.846 [2024-07-25 01:11:38.228701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.846 [2024-07-25 01:11:38.228745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.846 [2024-07-25 01:11:38.228792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.846 [2024-07-25 01:11:38.228836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.846 [2024-07-25 01:11:38.228884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.846 [2024-07-25 01:11:38.228931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.846 [2024-07-25 01:11:38.228980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.846 [2024-07-25 01:11:38.229022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.846 [2024-07-25 01:11:38.229067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.846 [2024-07-25 01:11:38.229111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.846 [2024-07-25 01:11:38.229157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.846 [2024-07-25 01:11:38.229201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.846 [2024-07-25 01:11:38.229245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.846 
[2024-07-25 01:11:38.229296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.846 [2024-07-25 01:11:38.229339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.846 [2024-07-25 01:11:38.229383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.846 [2024-07-25 01:11:38.229435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.846 [2024-07-25 01:11:38.229481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.846 [2024-07-25 01:11:38.229520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.846 [2024-07-25 01:11:38.229559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.846 [2024-07-25 01:11:38.229589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.846 [2024-07-25 01:11:38.229625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.846 [2024-07-25 01:11:38.229665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.846 [2024-07-25 01:11:38.229702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.846 [2024-07-25 01:11:38.229740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.846 [2024-07-25 01:11:38.229777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.846 [2024-07-25 01:11:38.229820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.846 [2024-07-25 01:11:38.229863] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.846 [2024-07-25 01:11:38.229903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.846 [2024-07-25 01:11:38.229943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.846 [2024-07-25 01:11:38.229979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.846 [2024-07-25 01:11:38.230022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.846 [2024-07-25 01:11:38.230072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.846 [2024-07-25 01:11:38.230105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.846 [2024-07-25 01:11:38.230143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.846 [2024-07-25 01:11:38.230182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.846 [2024-07-25 01:11:38.230224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.846 [2024-07-25 01:11:38.230266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.846 [2024-07-25 01:11:38.230301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.846 [2024-07-25 01:11:38.230343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.846 [2024-07-25 01:11:38.230386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.846 [2024-07-25 01:11:38.230423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:15.846 [2024-07-25 01:11:38.230467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:15.846 [2024-07-25 01:11:38.230502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.230544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.230588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.230628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.230667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.230705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.230743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.230779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.231289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.231339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.231381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.231423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.231476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.231523] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.231565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.231613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.231656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.231703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.231752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.231792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.231835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.231878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.231919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.231965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.232007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.232055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.232105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.232151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.232197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.232239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.232283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.232325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.232369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.232409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.232446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.232489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.232533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.232571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.232607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.232649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.232683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.232720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 
01:11:38.232758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.232799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.232841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.232881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.232918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.232956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.232992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.233031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.233069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.233112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.233147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.233185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.233227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.233264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.233301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.233342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.233382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.233419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.233465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.233511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.233552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.233594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.233637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.233681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.233726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.233770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.233817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.233862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.233903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 
[2024-07-25 01:11:38.233946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.234421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.234468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.234510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.234548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.234586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.234624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.234662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.234698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.234735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.234774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.234813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.234854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.234900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.234941] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.234980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.235011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.235053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.235091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.235130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.235171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.235215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.235257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.235297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.128 [2024-07-25 01:11:38.235337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.129 [2024-07-25 01:11:38.235374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.129 [2024-07-25 01:11:38.235410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.129 [2024-07-25 01:11:38.235453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.129 [2024-07-25 01:11:38.235498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:16.129 [2024-07-25 01:11:38.235541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.129 [2024-07-25 01:11:38.235586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.129 [2024-07-25 01:11:38.235630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.129 [2024-07-25 01:11:38.235675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.129 [2024-07-25 01:11:38.235718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.129 [2024-07-25 01:11:38.235760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.129 [2024-07-25 01:11:38.235809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.129 [2024-07-25 01:11:38.235855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.129 [2024-07-25 01:11:38.235898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.129 [2024-07-25 01:11:38.235943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.129 [2024-07-25 01:11:38.235985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.129 [2024-07-25 01:11:38.236030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.129 [2024-07-25 01:11:38.236081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.129 [2024-07-25 01:11:38.236123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.129 [2024-07-25 01:11:38.236166] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.129 [2024-07-25 01:11:38.236215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.129 [2024-07-25 01:11:38.236258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.129 [2024-07-25 01:11:38.236300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.129 [2024-07-25 01:11:38.236340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.129 [2024-07-25 01:11:38.236387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.129 [2024-07-25 01:11:38.236430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.129 [2024-07-25 01:11:38.236477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.129 [2024-07-25 01:11:38.236518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.129 [2024-07-25 01:11:38.236559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.129 [2024-07-25 01:11:38.236599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.129 [2024-07-25 01:11:38.236646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.129 [2024-07-25 01:11:38.236686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.129 [2024-07-25 01:11:38.236728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.129 [2024-07-25 01:11:38.236767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:16.129 [2024-07-25 01:11:38.236800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.132 [2024-07-25 01:11:38.251903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:16.132 [2024-07-25 01:11:38.251944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.132 [2024-07-25 01:11:38.251987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.132 [2024-07-25 01:11:38.252031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.132 [2024-07-25 01:11:38.252076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.132 [2024-07-25 01:11:38.252116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.132 [2024-07-25 01:11:38.252155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.132 [2024-07-25 01:11:38.252194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.132 [2024-07-25 01:11:38.252230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.132 [2024-07-25 01:11:38.252269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.132 [2024-07-25 01:11:38.252316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.132 [2024-07-25 01:11:38.252358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.132 [2024-07-25 01:11:38.252388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.132 [2024-07-25 01:11:38.252427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.132 [2024-07-25 01:11:38.252465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.132 [2024-07-25 
01:11:38.252506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.132 [2024-07-25 01:11:38.252542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.132 [2024-07-25 01:11:38.252581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.132 [2024-07-25 01:11:38.252618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.132 [2024-07-25 01:11:38.252660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.132 [2024-07-25 01:11:38.252705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.132 [2024-07-25 01:11:38.252744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.132 [2024-07-25 01:11:38.253314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.132 [2024-07-25 01:11:38.253356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.132 [2024-07-25 01:11:38.253393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.132 [2024-07-25 01:11:38.253431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.253479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.253522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.253567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.253614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.253657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.253705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.253753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.253796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.253840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.253887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.253932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.253971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.254017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.254067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.254113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.254158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.254202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.254247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 
[2024-07-25 01:11:38.254290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.254335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.254382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.254429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.254473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.254514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.254548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.254586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.254624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.254662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.254700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.254744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.254782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.254822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.254859] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.254900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.254936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.254972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.255011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.255061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.255092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.255128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.255165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.255200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.255236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.255273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.255310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.255351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.255390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:16.133 [2024-07-25 01:11:38.255431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.255467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.255505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.255545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.255583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.255629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.255671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.255716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.255760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.255804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.255847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.255894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.256390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.256438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.256489] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.256531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.256573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.256622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.256663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.256707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.256750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.256798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.256843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.256889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.256942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.256985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.257030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.257081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.257130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.257169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.257209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.257248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.257279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.257317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.257355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.257394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.257432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.257467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.257505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.257550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.257587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.257625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.257663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 
01:11:38.257702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.257741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.257771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.257810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.257849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.257894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.133 [2024-07-25 01:11:38.257935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.134 [2024-07-25 01:11:38.257969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.134 [2024-07-25 01:11:38.258006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.134 [2024-07-25 01:11:38.258050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.134 [2024-07-25 01:11:38.258088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.134 [2024-07-25 01:11:38.258124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.134 [2024-07-25 01:11:38.258165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.134 [2024-07-25 01:11:38.258205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.134 [2024-07-25 01:11:38.258243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.134 [2024-07-25 01:11:38.258284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.134 [2024-07-25 01:11:38.258321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.134 [2024-07-25 01:11:38.258356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.134 [2024-07-25 01:11:38.258398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.134 [2024-07-25 01:11:38.258442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.134 [2024-07-25 01:11:38.258487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.134 [2024-07-25 01:11:38.258531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.134 [2024-07-25 01:11:38.258573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.134 [2024-07-25 01:11:38.258616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.134 [2024-07-25 01:11:38.258660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.134 [2024-07-25 01:11:38.258710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.134 [2024-07-25 01:11:38.258754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.134 [2024-07-25 01:11:38.258798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.134 [2024-07-25 01:11:38.258846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.134 
[2024-07-25 01:11:38.258889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.134 [2024-07-25 01:11:38.258935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.134 [2024-07-25 01:11:38.258978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.134 [2024-07-25 01:11:38.259024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.134 [2024-07-25 01:11:38.259503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.134 [2024-07-25 01:11:38.259548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.134 [2024-07-25 01:11:38.259592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.134 [2024-07-25 01:11:38.259633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.134 [2024-07-25 01:11:38.259671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.134 [2024-07-25 01:11:38.259708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.134 [2024-07-25 01:11:38.259747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.134 [2024-07-25 01:11:38.259784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.134 [2024-07-25 01:11:38.259827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.134 [2024-07-25 01:11:38.259875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.134 [2024-07-25 01:11:38.259909] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.134 [2024-07-25 01:11:38.259948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.134 [2024-07-25 01:11:38.259985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.134 [2024-07-25 01:11:38.260023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.134 [2024-07-25 01:11:38.260066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.134 [2024-07-25 01:11:38.260107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.134 [2024-07-25 01:11:38.260147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.134 [2024-07-25 01:11:38.260189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.134 [2024-07-25 01:11:38.260228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.134 [2024-07-25 01:11:38.260270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.134 [2024-07-25 01:11:38.260308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.134 [2024-07-25 01:11:38.260346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.134 [2024-07-25 01:11:38.260380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.134 [2024-07-25 01:11:38.260426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.134 [2024-07-25 01:11:38.260471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:16.134 [2024-07-25 01:11:38.260518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.134 [2024-07-25 01:11:38.260561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.134 [2024-07-25 01:11:38.260605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.134 [2024-07-25 01:11:38.260648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.134 [2024-07-25 01:11:38.260697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.134 [2024-07-25 01:11:38.260740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.134 [2024-07-25 01:11:38.260783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.134 [2024-07-25 01:11:38.260832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.134 [2024-07-25 01:11:38.260875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.134 [2024-07-25 01:11:38.260920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.134 [2024-07-25 01:11:38.260953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.134 [2024-07-25 01:11:38.260988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.134 [2024-07-25 01:11:38.261029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.134 [2024-07-25 01:11:38.261072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.134 [2024-07-25 01:11:38.261109] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.134 [2024-07-25 01:11:38.261147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.137 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:11:16.137 [2024-07-25 01:11:38.276357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:16.137 [2024-07-25 01:11:38.276407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.137 [2024-07-25 01:11:38.276453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.276500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.276548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.276593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.276637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.276680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.276718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.276749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.276788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.276829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.276866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.276905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.276951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 
01:11:38.277000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.277050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.277096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.277148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.277194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.277237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.277283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.277335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.277384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.277429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.277472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.277519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.277563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.277615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.277659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.277702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.277747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.277799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.277841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.277884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.277930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.277981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.278026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.278073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.278124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.278606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.278648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.278696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.278737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 
[2024-07-25 01:11:38.278779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.278814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.278851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.278888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.278925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.278965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.279006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.279049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.279094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.279136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.279173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.279221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.279269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.279313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.279363] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.279406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.279445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.279493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.279539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.279583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.279628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.279672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.279722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.279764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.279811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.279855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.279903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.279949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.279996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:16.138 [2024-07-25 01:11:38.280048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.280093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.280139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.280185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.280228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.280271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.280317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.280357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.280391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.280433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.280473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.280511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.280551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.280593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.280631] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.280672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.280718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.280759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.280801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.280835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.280876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.280912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.280952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.280995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.281033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.281077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.138 [2024-07-25 01:11:38.281121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.281159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.281196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.281237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.281731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.281778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.281826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.281869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.281912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.281963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.282010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.282060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.282102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.282150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.282195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.282237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.282281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 
01:11:38.282332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.282371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.282415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.282455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.282486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.282531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.282573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.282610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.282652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.282697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.282738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.282776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.282813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.282850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.282892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.282930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.282971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.283001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.283046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.283083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.283124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.283161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.283199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.283236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.283277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.283317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.283358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.283398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.283444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 
[2024-07-25 01:11:38.283484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.283524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.283558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.283608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.283655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.283697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.283742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.283792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.283837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.283883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.283928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.283975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.284023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.284072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.284118] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.284168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.284207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.284251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.284296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.284339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.284387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.284431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.284907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.284955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.284998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.285045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.285091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.285133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.285170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:16.139 [2024-07-25 01:11:38.285208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.285253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.285283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.285323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.285366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.285404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.285440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.285482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.285524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.285562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.285600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.285638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.285681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.285723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.285759] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.139 [2024-07-25 01:11:38.285795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical "nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" entries repeated, timestamps 00:11:16.139 through 00:11:16.143 (2024-07-25 01:11:38.285832 – 01:11:38.299926), elided ...]
00:11:16.143 [2024-07-25 01:11:38.299969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:11:16.143 [2024-07-25 01:11:38.300013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.300062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.300114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.300156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.300197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.300245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.300288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.300755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.300802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.300842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.300880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.300919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.300964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.301002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.301035] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.301075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.301113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.301152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.301190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.301235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.301276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.301314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.301352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.301385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.301421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.301456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.301493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.301535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.301574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.301612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.301650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.301687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.301724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.301766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.301814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.301859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.301907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.301961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.302007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.302055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.302101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.302144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.302187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 
01:11:38.302238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.302283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.302330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.302375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.302420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.302463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.302509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.302557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.302601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.302640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.302680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.302718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.302750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.302788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.302825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.302867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.302906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.302946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.302987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.303025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.303067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.303113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.303150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.303186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.303227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.303264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.303300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.303340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.303859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 
[2024-07-25 01:11:38.303906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.303954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.304001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.304054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.304099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.304142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.304188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.304234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.304278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.304323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.304368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.304410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.304454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.304498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.143 [2024-07-25 01:11:38.304551] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.304596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.304643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.304683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.304719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.304756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.304797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.304834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.304874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.304913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.304951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.304991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.305041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.305086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.305123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:16.144 [2024-07-25 01:11:38.305161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.305200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.305238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.305280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.305325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.305363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.305399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.305443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.305481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.305520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.305562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.305615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.305655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.305693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.305733] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.305776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.305820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.305865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.305906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.305955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.305999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.306050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.306094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.306145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.306189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.306235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.306285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.306341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.306385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.306429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.306481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.306527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.306571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.307088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.307140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.307186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.307231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.307272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.307319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.307361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.307405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.307453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.307492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 
01:11:38.307530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.307571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.307615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.307655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.307693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.307724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.307766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.307804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.307842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.307881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.307919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.307965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.308006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.308051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.308092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.308134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.308172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.308210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.308240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.308283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.308320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.308357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.308397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.308437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.308473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.308513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.308550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.308589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.308628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 
[2024-07-25 01:11:38.308666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.308706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.308744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.308784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.308820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.308862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.308910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.308958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.309002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.309050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.309101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.144 [2024-07-25 01:11:38.309145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.145 [2024-07-25 01:11:38.309188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.145 [2024-07-25 01:11:38.309233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.145 [2024-07-25 01:11:38.309283] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.145 [2024-07-25 01:11:38.309324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.145 [2024-07-25 01:11:38.309369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.145 [2024-07-25 01:11:38.309426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.145 [2024-07-25 01:11:38.309472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.145 [2024-07-25 01:11:38.309518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.145 [2024-07-25 01:11:38.309560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.145 [2024-07-25 01:11:38.309592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.145 [2024-07-25 01:11:38.309634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.145 [2024-07-25 01:11:38.309674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.145 [2024-07-25 01:11:38.309711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.145 [2024-07-25 01:11:38.310204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.145 [2024-07-25 01:11:38.310250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.145 [2024-07-25 01:11:38.310290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.145 [2024-07-25 01:11:38.310335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:16.145 [2024-07-25 01:11:38.310374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
[identical *ERROR* line repeated verbatim several hundred times, timestamps 01:11:38.310413 through 01:11:38.324277]
00:11:16.147 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
> SGL length 1 00:11:16.148 [2024-07-25 01:11:38.324314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.148 [2024-07-25 01:11:38.324358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.148 [2024-07-25 01:11:38.324405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.148 [2024-07-25 01:11:38.324445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.148 [2024-07-25 01:11:38.324489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.148 [2024-07-25 01:11:38.324540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.148 [2024-07-25 01:11:38.324587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.148 [2024-07-25 01:11:38.324626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.148 [2024-07-25 01:11:38.324678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.148 [2024-07-25 01:11:38.324719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.148 [2024-07-25 01:11:38.324763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.148 [2024-07-25 01:11:38.324810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.148 [2024-07-25 01:11:38.324854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.148 [2024-07-25 01:11:38.324898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.148 [2024-07-25 01:11:38.324941] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.148 [2024-07-25 01:11:38.324992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.148 [2024-07-25 01:11:38.325034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.148 [2024-07-25 01:11:38.325081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.148 [2024-07-25 01:11:38.325123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.148 [2024-07-25 01:11:38.325166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.148 [2024-07-25 01:11:38.325208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.148 [2024-07-25 01:11:38.325252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.148 [2024-07-25 01:11:38.325287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.148 [2024-07-25 01:11:38.325323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.148 [2024-07-25 01:11:38.325368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.148 [2024-07-25 01:11:38.325937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.148 [2024-07-25 01:11:38.325981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.148 [2024-07-25 01:11:38.326022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.148 [2024-07-25 01:11:38.326067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:16.148 [2024-07-25 01:11:38.326108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.148 [2024-07-25 01:11:38.326147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.148 [2024-07-25 01:11:38.326187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.148 [2024-07-25 01:11:38.326229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.148 [2024-07-25 01:11:38.326278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.148 [2024-07-25 01:11:38.326322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.148 [2024-07-25 01:11:38.326363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.148 [2024-07-25 01:11:38.326411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.148 [2024-07-25 01:11:38.326464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.148 [2024-07-25 01:11:38.326511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.148 [2024-07-25 01:11:38.326555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.148 [2024-07-25 01:11:38.326599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.148 [2024-07-25 01:11:38.326649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.148 [2024-07-25 01:11:38.326693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.148 [2024-07-25 
01:11:38.326735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.148 [2024-07-25 01:11:38.326779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.148 [2024-07-25 01:11:38.326824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.148 [2024-07-25 01:11:38.326869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.148 [2024-07-25 01:11:38.326915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.148 [2024-07-25 01:11:38.326963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.148 [2024-07-25 01:11:38.327006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.148 [2024-07-25 01:11:38.327054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.148 [2024-07-25 01:11:38.327098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.148 [2024-07-25 01:11:38.327141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.148 [2024-07-25 01:11:38.327190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.148 [2024-07-25 01:11:38.327233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.148 [2024-07-25 01:11:38.327274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.148 [2024-07-25 01:11:38.327316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.148 [2024-07-25 01:11:38.327360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.148 [2024-07-25 01:11:38.327418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.148 [2024-07-25 01:11:38.327459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.148 [2024-07-25 01:11:38.327504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.148 [2024-07-25 01:11:38.327548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.148 [2024-07-25 01:11:38.327590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.148 [2024-07-25 01:11:38.327638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.148 [2024-07-25 01:11:38.327687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.148 [2024-07-25 01:11:38.327730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.148 [2024-07-25 01:11:38.327775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.148 [2024-07-25 01:11:38.327818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.148 [2024-07-25 01:11:38.327861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.327907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.327958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.328001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 
[2024-07-25 01:11:38.328047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.328089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.328125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.328163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.328205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.328235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.328274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.328310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.328352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.328390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.328435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.328483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.328524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.328565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.328602] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.328640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.328677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 01:11:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:16.149 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:16.149 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:16.149 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:16.149 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:16.149 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:16.149 [2024-07-25 01:11:38.529549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.529609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.529655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.529691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.529727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.529761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.529798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.529836] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.529881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.529921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.529965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.530005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.530035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.530074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.530112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.530152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.530192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.530237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.530276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.530315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.530353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.530387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.530441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.530483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.530525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.530571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.530615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.530657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.530704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.530748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.530790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.530835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.530875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.530917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.530963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.531002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 
01:11:38.531045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.531093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.531136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.531178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.531220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.531263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.531306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.531355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.531395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.531435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.531488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.531530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.531574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.531618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.531662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.531708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.531752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.531794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.531839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.531884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.531928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.531975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.532013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.532056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.532097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.532140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.532181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.532218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.532740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 
[2024-07-25 01:11:38.532779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.532813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.532849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.532885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.532923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.532964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.533006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.149 [2024-07-25 01:11:38.533057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.150 [2024-07-25 01:11:38.533101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.150 [2024-07-25 01:11:38.533143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.150 [2024-07-25 01:11:38.533186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.150 [2024-07-25 01:11:38.533239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.150 [2024-07-25 01:11:38.533284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.150 [2024-07-25 01:11:38.533328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.150 [2024-07-25 01:11:38.533373] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.150 [2024-07-25 01:11:38.533417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.150 [2024-07-25 01:11:38.533447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.150 [2024-07-25 01:11:38.533489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.150 [2024-07-25 01:11:38.533528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.150 [2024-07-25 01:11:38.533570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.150 [2024-07-25 01:11:38.533615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.150 [2024-07-25 01:11:38.533659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.150 [2024-07-25 01:11:38.533697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.150 [2024-07-25 01:11:38.533737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.150 [2024-07-25 01:11:38.533776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.150 [2024-07-25 01:11:38.533814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.150 [2024-07-25 01:11:38.533852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.150 [2024-07-25 01:11:38.533892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.150 [2024-07-25 01:11:38.533933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:16.150 [2024-07-25 01:11:38.533967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.150 [2024-07-25 01:11:38.534010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.150 [2024-07-25 01:11:38.534054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.150 [2024-07-25 01:11:38.534100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.150 [2024-07-25 01:11:38.534145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.150 [2024-07-25 01:11:38.534193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.150 [2024-07-25 01:11:38.534238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.150 [2024-07-25 01:11:38.534281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.150 [2024-07-25 01:11:38.534329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.150 [2024-07-25 01:11:38.534373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.150 [2024-07-25 01:11:38.534414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.150 [2024-07-25 01:11:38.534461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.150 [2024-07-25 01:11:38.534501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.150 [2024-07-25 01:11:38.534545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.150 [2024-07-25 01:11:38.534583] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:11:16.153 Message suppressed 999 times: [2024-07-25 01:11:38.546051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:11:16.153 Read completed with error (sct=0, sc=15)
block size 512 > SGL length 1 00:11:16.153 [2024-07-25 01:11:38.549682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.153 [2024-07-25 01:11:38.549720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.153 [2024-07-25 01:11:38.549761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.153 [2024-07-25 01:11:38.549806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.153 [2024-07-25 01:11:38.549846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.153 [2024-07-25 01:11:38.549885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.153 [2024-07-25 01:11:38.549925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.153 [2024-07-25 01:11:38.549963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.153 [2024-07-25 01:11:38.550003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.153 [2024-07-25 01:11:38.550038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.153 [2024-07-25 01:11:38.550086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.153 [2024-07-25 01:11:38.550141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.153 [2024-07-25 01:11:38.550186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.153 [2024-07-25 01:11:38.550230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.153 [2024-07-25 
01:11:38.550278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.153 [2024-07-25 01:11:38.550323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.153 [2024-07-25 01:11:38.550366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.153 [2024-07-25 01:11:38.550413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.550454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.550497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.550543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.550588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.550630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.550680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.550723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.550771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.550816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.550861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.550901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.550950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.550995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.551036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.551085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.551128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.551616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.551658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.551702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.551742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.551778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.551817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.551853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.551889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.551920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 
[2024-07-25 01:11:38.551959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.551997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.552037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.552079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.552126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.552168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.552208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.552242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.552282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.552323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.552365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.552404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.552443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.552484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.552525] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.552566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.552603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.552640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.552680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.552715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.552756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.552798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.552853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.552895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.552938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.552983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.553026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.553075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.553119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:16.154 [2024-07-25 01:11:38.553164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.553211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.553249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.553287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.553324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.553362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.553393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.553431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.553469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.553516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.553559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.553600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.553642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.553682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.553729] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.553772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.553818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.553866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.553912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.553954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.554000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.554041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.554090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.554142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.554185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.554689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.554736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.554779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.554822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.554873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.554916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.554958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.555001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.555038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.555083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.555120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.555159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.555197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.555229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.555266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.555308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.555346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.154 [2024-07-25 01:11:38.555388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.155 [2024-07-25 
01:11:38.555427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.155 [2024-07-25 01:11:38.555465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.155 [2024-07-25 01:11:38.555502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.155 [2024-07-25 01:11:38.555541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.155 [2024-07-25 01:11:38.555579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.155 [2024-07-25 01:11:38.555618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.155 [2024-07-25 01:11:38.555657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.155 [2024-07-25 01:11:38.555695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.155 [2024-07-25 01:11:38.555730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.155 [2024-07-25 01:11:38.555765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.155 [2024-07-25 01:11:38.555803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.155 [2024-07-25 01:11:38.555842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.155 [2024-07-25 01:11:38.555882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.155 [2024-07-25 01:11:38.555920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.155 [2024-07-25 01:11:38.555962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.155 [2024-07-25 01:11:38.555997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.155 [2024-07-25 01:11:38.556038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.155 [2024-07-25 01:11:38.556084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.155 [2024-07-25 01:11:38.556121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.155 [2024-07-25 01:11:38.556157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.155 [2024-07-25 01:11:38.556193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.155 [2024-07-25 01:11:38.556237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.155 [2024-07-25 01:11:38.556283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.155 [2024-07-25 01:11:38.556326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.155 [2024-07-25 01:11:38.556370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.155 [2024-07-25 01:11:38.556418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.155 [2024-07-25 01:11:38.556464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.155 [2024-07-25 01:11:38.556512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.155 [2024-07-25 01:11:38.556556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.155 
[2024-07-25 01:11:38.556600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.155 [2024-07-25 01:11:38.556645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.155 [2024-07-25 01:11:38.556688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.155 [2024-07-25 01:11:38.556730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.155 [2024-07-25 01:11:38.556774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.155 [2024-07-25 01:11:38.556822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.155 [2024-07-25 01:11:38.556867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.155 [2024-07-25 01:11:38.556913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.155 [2024-07-25 01:11:38.556969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.155 [2024-07-25 01:11:38.557015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.155 [2024-07-25 01:11:38.557064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.155 [2024-07-25 01:11:38.557112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.155 [2024-07-25 01:11:38.557156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.155 [2024-07-25 01:11:38.557199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.155 [2024-07-25 01:11:38.557244] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.155 [2024-07-25 01:11:38.557294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.155 [2024-07-25 01:11:38.557336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.155 [2024-07-25 01:11:38.557847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.155 [2024-07-25 01:11:38.557891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.155 [2024-07-25 01:11:38.557934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.155 01:11:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:11:16.155 [2024-07-25 01:11:38.557973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.155 [2024-07-25 01:11:38.558004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.155 [2024-07-25 01:11:38.558041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.155 [2024-07-25 01:11:38.558081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.155 [2024-07-25 01:11:38.558125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.155 [2024-07-25 01:11:38.558163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.155 [2024-07-25 01:11:38.558201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.155 [2024-07-25 01:11:38.558239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.155 
[2024-07-25 01:11:38.558277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.155 [2024-07-25 01:11:38.558315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.155 01:11:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:11:16.155 [2024-07-25 01:11:38.558353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.155 [2024-07-25 01:11:38.558396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.155 [2024-07-25 01:11:38.558441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.155 [2024-07-25 01:11:38.558480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.155 [2024-07-25 01:11:38.558511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.155 [2024-07-25 01:11:38.558548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.155 [2024-07-25 01:11:38.558583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.155 [2024-07-25 01:11:38.558620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.155 [2024-07-25 01:11:38.558659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.155 [2024-07-25 01:11:38.558696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.155 [2024-07-25 01:11:38.558734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.155 [2024-07-25 
01:11:38.558769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.155 [2024-07-25 
01:11:38.573650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.159 [2024-07-25 01:11:38.573691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.159 [2024-07-25 01:11:38.573734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.159 [2024-07-25 01:11:38.573774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.159 [2024-07-25 01:11:38.573817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.159 [2024-07-25 01:11:38.573859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.159 [2024-07-25 01:11:38.573899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.159 [2024-07-25 01:11:38.573947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.159 [2024-07-25 01:11:38.573986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.159 [2024-07-25 01:11:38.574033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.159 [2024-07-25 01:11:38.574083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.159 [2024-07-25 01:11:38.574125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.159 [2024-07-25 01:11:38.574170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.159 [2024-07-25 01:11:38.574221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.159 [2024-07-25 01:11:38.574256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.159 [2024-07-25 01:11:38.574294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.159 [2024-07-25 01:11:38.574331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.159 [2024-07-25 01:11:38.574367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.159 [2024-07-25 01:11:38.574405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.159 [2024-07-25 01:11:38.574434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.159 [2024-07-25 01:11:38.574477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.159 [2024-07-25 01:11:38.574516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.159 [2024-07-25 01:11:38.574552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.159 [2024-07-25 01:11:38.574590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.159 [2024-07-25 01:11:38.574629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.159 [2024-07-25 01:11:38.574673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.159 [2024-07-25 01:11:38.574719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.159 [2024-07-25 01:11:38.574759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.159 [2024-07-25 01:11:38.574798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.159 
[2024-07-25 01:11:38.574836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.159 [2024-07-25 01:11:38.574873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.159 [2024-07-25 01:11:38.574912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.159 [2024-07-25 01:11:38.574952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.159 [2024-07-25 01:11:38.574991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.159 [2024-07-25 01:11:38.575032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.159 [2024-07-25 01:11:38.575073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.159 [2024-07-25 01:11:38.575115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.159 [2024-07-25 01:11:38.575159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.159 [2024-07-25 01:11:38.575199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.159 [2024-07-25 01:11:38.575249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.159 [2024-07-25 01:11:38.575293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.159 [2024-07-25 01:11:38.575335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.159 [2024-07-25 01:11:38.575382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.159 [2024-07-25 01:11:38.575432] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.159 [2024-07-25 01:11:38.575480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.159 [2024-07-25 01:11:38.575526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.159 [2024-07-25 01:11:38.575575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.159 [2024-07-25 01:11:38.575615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.159 [2024-07-25 01:11:38.575658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.159 [2024-07-25 01:11:38.575710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.159 [2024-07-25 01:11:38.575753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.159 [2024-07-25 01:11:38.575796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.159 [2024-07-25 01:11:38.575843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.159 [2024-07-25 01:11:38.575880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.159 [2024-07-25 01:11:38.576374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.159 [2024-07-25 01:11:38.576420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.159 [2024-07-25 01:11:38.576459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.159 [2024-07-25 01:11:38.576500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:16.159 [2024-07-25 01:11:38.576538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.159 [2024-07-25 01:11:38.576575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.159 [2024-07-25 01:11:38.576617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.159 [2024-07-25 01:11:38.576661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.159 [2024-07-25 01:11:38.576704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.159 [2024-07-25 01:11:38.576751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.159 [2024-07-25 01:11:38.576793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.159 [2024-07-25 01:11:38.576836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.159 [2024-07-25 01:11:38.576890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.159 [2024-07-25 01:11:38.576930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.159 [2024-07-25 01:11:38.576970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.159 [2024-07-25 01:11:38.577012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.159 [2024-07-25 01:11:38.577059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.159 [2024-07-25 01:11:38.577103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.159 [2024-07-25 01:11:38.577147] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.159 [2024-07-25 01:11:38.577190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.159 [2024-07-25 01:11:38.577236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.159 [2024-07-25 01:11:38.577276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.159 [2024-07-25 01:11:38.577320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.577378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.577421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.577466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.577512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.577555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.577597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.577634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.577671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.577709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.577745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.577783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.577819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.577856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.577892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.577931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.577978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.578020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.578068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.578108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.578140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.578177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.578217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.578255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.578294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 
01:11:38.578338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.578374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.578411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.578450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.578490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.578531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.578570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.578610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.578659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.578704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.578747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.578793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.578840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.578883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.578928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.578977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.579472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.579521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.579574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.579619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.579665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.579718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.579762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.579809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.579861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.579902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.579946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.579993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.580036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 
[2024-07-25 01:11:38.580084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.580134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.580178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.580223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.580280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.580321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.580367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.580409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.580450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.580488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.580527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.580564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.580594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.580635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.580670] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.580706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.580742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.580781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.580822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.580857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.580893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.580933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.580968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.581005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.581041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.581085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.160 [2024-07-25 01:11:38.581119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.161 [2024-07-25 01:11:38.581159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.161 [2024-07-25 01:11:38.581196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:16.161 [2024-07-25 01:11:38.581232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.161 [2024-07-25 01:11:38.581271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.161 [2024-07-25 01:11:38.581310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.161 [2024-07-25 01:11:38.581346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.161 [2024-07-25 01:11:38.581383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.161 [2024-07-25 01:11:38.581422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.161 [2024-07-25 01:11:38.581452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.161 [2024-07-25 01:11:38.581487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.161 [2024-07-25 01:11:38.581525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.161 [2024-07-25 01:11:38.581561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.161 [2024-07-25 01:11:38.581605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.161 [2024-07-25 01:11:38.581651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.161 [2024-07-25 01:11:38.581692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.161 [2024-07-25 01:11:38.581732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.161 [2024-07-25 01:11:38.581772] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.161 [2024-07-25 01:11:38.581808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.161 [2024-07-25 01:11:38.581847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.161 [2024-07-25 01:11:38.581886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.161 [2024-07-25 01:11:38.581928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.161 [2024-07-25 01:11:38.581969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.161 [2024-07-25 01:11:38.582021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.161 [2024-07-25 01:11:38.582069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.161 [2024-07-25 01:11:38.582580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.161 [2024-07-25 01:11:38.582627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.161 [2024-07-25 01:11:38.582672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.161 [2024-07-25 01:11:38.582714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.161 [2024-07-25 01:11:38.582757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.161 [2024-07-25 01:11:38.582801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.161 [2024-07-25 01:11:38.582848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:16.161 [2024-07-25 01:11:38.582894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.164 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:11:16.164 [2024-07-25 01:11:38.596800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:11:16.164 [2024-07-25 01:11:38.596842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.164 [2024-07-25 01:11:38.596880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.164 [2024-07-25 01:11:38.596919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.164 [2024-07-25 01:11:38.596956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.164 [2024-07-25 01:11:38.596996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.164 [2024-07-25 01:11:38.597046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.164 [2024-07-25 01:11:38.597087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.164 [2024-07-25 01:11:38.597131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.164 [2024-07-25 01:11:38.597178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.164 [2024-07-25 01:11:38.597221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.164 [2024-07-25 01:11:38.597277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.164 [2024-07-25 01:11:38.597321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.164 [2024-07-25 01:11:38.597362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.164 [2024-07-25 01:11:38.597411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.164 [2024-07-25 01:11:38.597454] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.164 [2024-07-25 01:11:38.597500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.164 [2024-07-25 01:11:38.597545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.164 [2024-07-25 01:11:38.597590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.164 [2024-07-25 01:11:38.597637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.164 [2024-07-25 01:11:38.597682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.164 [2024-07-25 01:11:38.597727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.164 [2024-07-25 01:11:38.597771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.164 [2024-07-25 01:11:38.598262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.164 [2024-07-25 01:11:38.598310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.164 [2024-07-25 01:11:38.598352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.164 [2024-07-25 01:11:38.598398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.164 [2024-07-25 01:11:38.598441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.164 [2024-07-25 01:11:38.598483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.164 [2024-07-25 01:11:38.598528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:16.164 [2024-07-25 01:11:38.598576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.164 [2024-07-25 01:11:38.598618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.164 [2024-07-25 01:11:38.598659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.164 [2024-07-25 01:11:38.598699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.164 [2024-07-25 01:11:38.598736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.164 [2024-07-25 01:11:38.598780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.164 [2024-07-25 01:11:38.598828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.164 [2024-07-25 01:11:38.598868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.164 [2024-07-25 01:11:38.598911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.164 [2024-07-25 01:11:38.598951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.164 [2024-07-25 01:11:38.598989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.164 [2024-07-25 01:11:38.599028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.164 [2024-07-25 01:11:38.599064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.164 [2024-07-25 01:11:38.599104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.164 [2024-07-25 
01:11:38.599141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.164 [2024-07-25 01:11:38.599180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.164 [2024-07-25 01:11:38.599223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.164 [2024-07-25 01:11:38.599260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.164 [2024-07-25 01:11:38.599297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.164 [2024-07-25 01:11:38.599334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.164 [2024-07-25 01:11:38.599372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.164 [2024-07-25 01:11:38.599413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.164 [2024-07-25 01:11:38.599457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.164 [2024-07-25 01:11:38.599497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.164 [2024-07-25 01:11:38.599534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.164 [2024-07-25 01:11:38.599573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.164 [2024-07-25 01:11:38.599615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.164 [2024-07-25 01:11:38.599653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.164 [2024-07-25 01:11:38.599696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.164 [2024-07-25 01:11:38.599739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.164 [2024-07-25 01:11:38.599774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.164 [2024-07-25 01:11:38.599818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.164 [2024-07-25 01:11:38.599853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.165 [2024-07-25 01:11:38.599896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.165 [2024-07-25 01:11:38.599934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.165 [2024-07-25 01:11:38.599972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.165 [2024-07-25 01:11:38.600014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.165 [2024-07-25 01:11:38.600059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.165 [2024-07-25 01:11:38.600102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.165 [2024-07-25 01:11:38.600146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.165 [2024-07-25 01:11:38.600201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.165 [2024-07-25 01:11:38.600243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.165 [2024-07-25 01:11:38.600285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.165 
[2024-07-25 01:11:38.600331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.165 [2024-07-25 01:11:38.600376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.165 [2024-07-25 01:11:38.600424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.165 [2024-07-25 01:11:38.600468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.165 [2024-07-25 01:11:38.600516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.165 [2024-07-25 01:11:38.600555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.165 [2024-07-25 01:11:38.600597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.165 [2024-07-25 01:11:38.600643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.165 [2024-07-25 01:11:38.600687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.445 [2024-07-25 01:11:38.600732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.445 [2024-07-25 01:11:38.600781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.445 [2024-07-25 01:11:38.600822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.445 [2024-07-25 01:11:38.600863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.445 [2024-07-25 01:11:38.600908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.445 [2024-07-25 01:11:38.601397] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.445 [2024-07-25 01:11:38.601439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.445 [2024-07-25 01:11:38.601480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.445 [2024-07-25 01:11:38.601525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.445 [2024-07-25 01:11:38.601566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.445 [2024-07-25 01:11:38.601603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.445 [2024-07-25 01:11:38.601640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.445 [2024-07-25 01:11:38.601682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.445 [2024-07-25 01:11:38.601722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.445 [2024-07-25 01:11:38.601758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.445 [2024-07-25 01:11:38.601795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.445 [2024-07-25 01:11:38.601833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.445 [2024-07-25 01:11:38.601877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.445 [2024-07-25 01:11:38.601915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.445 [2024-07-25 01:11:38.601955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:16.445 [2024-07-25 01:11:38.601993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.445 [2024-07-25 01:11:38.602030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.445 [2024-07-25 01:11:38.602074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.445 [2024-07-25 01:11:38.602117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.445 [2024-07-25 01:11:38.602159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.445 [2024-07-25 01:11:38.602199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.445 [2024-07-25 01:11:38.602240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.445 [2024-07-25 01:11:38.602281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.445 [2024-07-25 01:11:38.602317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.445 [2024-07-25 01:11:38.602360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.445 [2024-07-25 01:11:38.602401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.445 [2024-07-25 01:11:38.602446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.445 [2024-07-25 01:11:38.602495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.445 [2024-07-25 01:11:38.602536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.445 [2024-07-25 01:11:38.602577] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.445 [2024-07-25 01:11:38.602624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.445 [2024-07-25 01:11:38.602669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.445 [2024-07-25 01:11:38.602714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.445 [2024-07-25 01:11:38.602756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.445 [2024-07-25 01:11:38.602795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.445 [2024-07-25 01:11:38.602834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.445 [2024-07-25 01:11:38.602870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.445 [2024-07-25 01:11:38.602910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.445 [2024-07-25 01:11:38.602950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.445 [2024-07-25 01:11:38.602981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.445 [2024-07-25 01:11:38.603016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.445 [2024-07-25 01:11:38.603057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.445 [2024-07-25 01:11:38.603096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.445 [2024-07-25 01:11:38.603132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:16.445 [2024-07-25 01:11:38.603172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.445 [2024-07-25 01:11:38.603212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.445 [2024-07-25 01:11:38.603251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.445 [2024-07-25 01:11:38.603285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.445 [2024-07-25 01:11:38.603330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.445 [2024-07-25 01:11:38.603372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.445 [2024-07-25 01:11:38.603413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.445 [2024-07-25 01:11:38.603462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.445 [2024-07-25 01:11:38.603503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.445 [2024-07-25 01:11:38.603550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.445 [2024-07-25 01:11:38.603595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.445 [2024-07-25 01:11:38.603639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.445 [2024-07-25 01:11:38.603683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.446 [2024-07-25 01:11:38.603726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.446 [2024-07-25 
01:11:38.603772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.446 [2024-07-25 01:11:38.603817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.446 [2024-07-25 01:11:38.603860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.446 [2024-07-25 01:11:38.603903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.446 [2024-07-25 01:11:38.603949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.446 [2024-07-25 01:11:38.604419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.446 [2024-07-25 01:11:38.604462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.446 [2024-07-25 01:11:38.604500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.446 [2024-07-25 01:11:38.604538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.446 [2024-07-25 01:11:38.604573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.446 [2024-07-25 01:11:38.604612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.446 [2024-07-25 01:11:38.604655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.446 [2024-07-25 01:11:38.604690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.446 [2024-07-25 01:11:38.604731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.446 [2024-07-25 01:11:38.604775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.446 [2024-07-25 01:11:38.604805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.446 [2024-07-25 01:11:38.604843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.446 [2024-07-25 01:11:38.604882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.446 [2024-07-25 01:11:38.604920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.446 [2024-07-25 01:11:38.604959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.446 [2024-07-25 01:11:38.604999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.446 [2024-07-25 01:11:38.605038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.446 [2024-07-25 01:11:38.605084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.446 [2024-07-25 01:11:38.605125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.446 [2024-07-25 01:11:38.605163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.446 [2024-07-25 01:11:38.605207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.446 [2024-07-25 01:11:38.605246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.446 [2024-07-25 01:11:38.605281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.446 [2024-07-25 01:11:38.605326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.446 
[2024-07-25 01:11:38.605371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.446 [2024-07-25 01:11:38.605419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.446 [2024-07-25 01:11:38.605463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.446 [2024-07-25 01:11:38.605510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.446 [2024-07-25 01:11:38.605556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.446 [2024-07-25 01:11:38.605602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.446 [2024-07-25 01:11:38.605647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.446 [2024-07-25 01:11:38.605691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.446 [2024-07-25 01:11:38.605735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.446 [2024-07-25 01:11:38.605778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.446 [2024-07-25 01:11:38.605826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.446 [2024-07-25 01:11:38.605868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.446 [2024-07-25 01:11:38.605912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.446 [2024-07-25 01:11:38.605958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.446 [2024-07-25 01:11:38.606000] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.446 [2024-07-25 01:11:38.606041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.446 [2024-07-25 01:11:38.606088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.446 [2024-07-25 01:11:38.606129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.446 [2024-07-25 01:11:38.606175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.446 [2024-07-25 01:11:38.606220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.446 [2024-07-25 01:11:38.606272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.446 [2024-07-25 01:11:38.606314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.446 [2024-07-25 01:11:38.606356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.446 [2024-07-25 01:11:38.606408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.446 [2024-07-25 01:11:38.606449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.446 [2024-07-25 01:11:38.606492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.446 [2024-07-25 01:11:38.606537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.446 [2024-07-25 01:11:38.606580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.446 [2024-07-25 01:11:38.606625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:16.446 [2024-07-25 01:11:38.606672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
> SGL length 1 00:11:16.449 [2024-07-25 01:11:38.621740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.449 [2024-07-25 01:11:38.621784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.449 [2024-07-25 01:11:38.621820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.449 [2024-07-25 01:11:38.621856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.449 [2024-07-25 01:11:38.621894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.449 [2024-07-25 01:11:38.621930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.449 [2024-07-25 01:11:38.621970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.449 [2024-07-25 01:11:38.622014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.450 [2024-07-25 01:11:38.622057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.450 [2024-07-25 01:11:38.622100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.450 [2024-07-25 01:11:38.622140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.450 [2024-07-25 01:11:38.622179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.450 [2024-07-25 01:11:38.622210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.450 [2024-07-25 01:11:38.622247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.450 [2024-07-25 01:11:38.622285] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.450 [2024-07-25 01:11:38.622325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.450 [2024-07-25 01:11:38.622367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.450 [2024-07-25 01:11:38.622411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.450 [2024-07-25 01:11:38.622448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.450 [2024-07-25 01:11:38.622486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.450 [2024-07-25 01:11:38.622526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.450 [2024-07-25 01:11:38.622570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.450 [2024-07-25 01:11:38.622618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.450 [2024-07-25 01:11:38.622670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.450 [2024-07-25 01:11:38.622715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.450 [2024-07-25 01:11:38.623210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.450 [2024-07-25 01:11:38.623256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.450 [2024-07-25 01:11:38.623294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.450 [2024-07-25 01:11:38.623342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:16.450 [2024-07-25 01:11:38.623385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.450 [2024-07-25 01:11:38.623424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.450 [2024-07-25 01:11:38.623464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.450 [2024-07-25 01:11:38.623506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.450 [2024-07-25 01:11:38.623542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.450 [2024-07-25 01:11:38.623582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.450 [2024-07-25 01:11:38.623625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.450 [2024-07-25 01:11:38.623665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.450 [2024-07-25 01:11:38.623694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.450 [2024-07-25 01:11:38.623731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.450 [2024-07-25 01:11:38.623772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.450 [2024-07-25 01:11:38.623809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.450 [2024-07-25 01:11:38.623848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.450 [2024-07-25 01:11:38.623890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.450 [2024-07-25 
01:11:38.623970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.450 [2024-07-25 01:11:38.624008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.450 [2024-07-25 01:11:38.624050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.450 [2024-07-25 01:11:38.624131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.450 [2024-07-25 01:11:38.624172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.450 [2024-07-25 01:11:38.624221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.450 [2024-07-25 01:11:38.624268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.450 [2024-07-25 01:11:38.624311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.450 [2024-07-25 01:11:38.624356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.450 [2024-07-25 01:11:38.624404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.450 [2024-07-25 01:11:38.624447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.450 [2024-07-25 01:11:38.624492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.450 [2024-07-25 01:11:38.624532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.450 [2024-07-25 01:11:38.624577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.450 [2024-07-25 01:11:38.624619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.450 [2024-07-25 01:11:38.624658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.450 [2024-07-25 01:11:38.624694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.450 [2024-07-25 01:11:38.624735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.450 [2024-07-25 01:11:38.624769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.450 [2024-07-25 01:11:38.624811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.450 [2024-07-25 01:11:38.624850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.450 [2024-07-25 01:11:38.624896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.450 [2024-07-25 01:11:38.624940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.450 [2024-07-25 01:11:38.624986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.450 [2024-07-25 01:11:38.625038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.450 [2024-07-25 01:11:38.625084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.450 [2024-07-25 01:11:38.625130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.450 [2024-07-25 01:11:38.625177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.450 [2024-07-25 01:11:38.625215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.450 
[2024-07-25 01:11:38.625259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.450 [2024-07-25 01:11:38.625306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.450 [2024-07-25 01:11:38.625348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.450 [2024-07-25 01:11:38.625388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.450 [2024-07-25 01:11:38.625435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.450 [2024-07-25 01:11:38.625486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.450 [2024-07-25 01:11:38.625528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.450 [2024-07-25 01:11:38.625570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.450 [2024-07-25 01:11:38.625621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.450 [2024-07-25 01:11:38.625659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.625700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.625735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.625774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.625818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.625859] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.625900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.625935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.626517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.626565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.626609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.626652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.626698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.626742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.626788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.626834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.626880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.626927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.626973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.627022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:16.451 [2024-07-25 01:11:38.627074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.627119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.627165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.627211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.627254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.627301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.627345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.627389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.627436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.627488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.627529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.627571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.627616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.627659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.627704] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.627749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.627790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.627836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.627880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.627922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.627968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.628011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.628056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.628099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.628142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.628190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.628232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.628275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.628319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.628365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.628408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.628451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.628502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.628542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.628586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.628630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.628671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.628715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.628755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.628795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.628839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.628876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.628916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 
01:11:38.628955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.628999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.629033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.629074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.629114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.629149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.629189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.629228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.629679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.629720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.629756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.629792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.629834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.629876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.629910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.629951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.629992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.630036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.630082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.630125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.630166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.630209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.630245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.630289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.630329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.630372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.451 [2024-07-25 01:11:38.630416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.452 [2024-07-25 01:11:38.630460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.452 [2024-07-25 01:11:38.630507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.452 
[2024-07-25 01:11:38.630555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.452 [2024-07-25 01:11:38.630599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.452 [2024-07-25 01:11:38.630641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.452 [2024-07-25 01:11:38.630685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.452 [2024-07-25 01:11:38.630723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.452 [2024-07-25 01:11:38.630758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.452 [2024-07-25 01:11:38.630798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.452 [2024-07-25 01:11:38.630835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.452 [2024-07-25 01:11:38.630875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.452 [2024-07-25 01:11:38.630918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.452 [2024-07-25 01:11:38.630954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.452 [2024-07-25 01:11:38.630993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.452 [2024-07-25 01:11:38.631028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.452 [2024-07-25 01:11:38.631067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.452 [2024-07-25 01:11:38.631105] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.452 [2024-07-25 01:11:38.631142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.455 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:11:16.455 [2024-07-25 01:11:38.645887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:11:16.455 [2024-07-25 01:11:38.645926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.455 [2024-07-25 01:11:38.645971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.455 [2024-07-25 01:11:38.646021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.455 [2024-07-25 01:11:38.646067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.455 [2024-07-25 01:11:38.646115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.455 [2024-07-25 01:11:38.646160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.455 [2024-07-25 01:11:38.646205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.455 [2024-07-25 01:11:38.646250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.455 [2024-07-25 01:11:38.646293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.455 [2024-07-25 01:11:38.646334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.455 [2024-07-25 01:11:38.646386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.455 [2024-07-25 01:11:38.646430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.455 [2024-07-25 01:11:38.646472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.455 [2024-07-25 01:11:38.646525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.455 [2024-07-25 01:11:38.646565] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.455 [2024-07-25 01:11:38.646607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.455 [2024-07-25 01:11:38.646646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.455 [2024-07-25 01:11:38.646686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.455 [2024-07-25 01:11:38.646723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.455 [2024-07-25 01:11:38.646760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.455 [2024-07-25 01:11:38.646802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.455 [2024-07-25 01:11:38.646839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.455 [2024-07-25 01:11:38.646878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.455 [2024-07-25 01:11:38.646916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.455 [2024-07-25 01:11:38.646956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.455 [2024-07-25 01:11:38.646997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.455 [2024-07-25 01:11:38.647034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.455 [2024-07-25 01:11:38.647074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.455 [2024-07-25 01:11:38.647118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:16.455 [2024-07-25 01:11:38.647160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.647197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.647245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.647277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.647315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.647352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.647393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.647425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.647465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.647504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.647547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.647592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.647630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.647673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.648149] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.648196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.648241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.648289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.648330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.648376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.648418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.648460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.648508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.648557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.648596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.648636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.648693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.648734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.648780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.648822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.648865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.648909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.648960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.649005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.649048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.649091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.649132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.649173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.649203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.649241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.649281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.649317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.649357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 
01:11:38.649399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.649443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.649485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.649522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.649564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.649601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.649645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.649688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.649719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.649755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.649793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.649832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.649867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.649905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.649944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.649983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.650027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.650072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.650115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.650157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.650196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.650239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.650284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.650331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.650385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.650428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.650474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.650521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.650569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 
[2024-07-25 01:11:38.650612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.650657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.650705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.650747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.650791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.650834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.651351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.651397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.651435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.651480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.651514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.651550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.651591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.651629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.651666] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.651704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.651740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.456 [2024-07-25 01:11:38.651775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.457 [2024-07-25 01:11:38.651814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.457 [2024-07-25 01:11:38.651856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.457 [2024-07-25 01:11:38.651896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.457 [2024-07-25 01:11:38.651933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.457 [2024-07-25 01:11:38.651973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.457 [2024-07-25 01:11:38.652007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.457 [2024-07-25 01:11:38.652045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.457 [2024-07-25 01:11:38.652089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.457 [2024-07-25 01:11:38.652130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.457 [2024-07-25 01:11:38.652171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.457 [2024-07-25 01:11:38.652208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:16.457 [2024-07-25 01:11:38.652246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.457 [2024-07-25 01:11:38.652287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.457 [2024-07-25 01:11:38.652327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.457 [2024-07-25 01:11:38.652364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.457 [2024-07-25 01:11:38.652406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.457 [2024-07-25 01:11:38.652443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.457 [2024-07-25 01:11:38.652483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.457 [2024-07-25 01:11:38.652522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.457 [2024-07-25 01:11:38.652564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.457 [2024-07-25 01:11:38.652607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.457 [2024-07-25 01:11:38.652648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.457 [2024-07-25 01:11:38.652694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.457 [2024-07-25 01:11:38.652737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.457 [2024-07-25 01:11:38.652794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.457 [2024-07-25 01:11:38.652835] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.457 [2024-07-25 01:11:38.652878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.457 [2024-07-25 01:11:38.652932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.457 [2024-07-25 01:11:38.652975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.457 [2024-07-25 01:11:38.653017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.457 [2024-07-25 01:11:38.653062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.457 [2024-07-25 01:11:38.653105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.457 [2024-07-25 01:11:38.653154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.457 [2024-07-25 01:11:38.653198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.457 [2024-07-25 01:11:38.653240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.457 [2024-07-25 01:11:38.653283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.457 [2024-07-25 01:11:38.653326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.457 [2024-07-25 01:11:38.653370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.457 [2024-07-25 01:11:38.653413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.457 [2024-07-25 01:11:38.653462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:16.457 [2024-07-25 01:11:38.653509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.457 [2024-07-25 01:11:38.653554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.457 [2024-07-25 01:11:38.653603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.457 [2024-07-25 01:11:38.653643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.457 [2024-07-25 01:11:38.653683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.457 [2024-07-25 01:11:38.653730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.457 [2024-07-25 01:11:38.653772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.457 [2024-07-25 01:11:38.653812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.457 [2024-07-25 01:11:38.653852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.457 [2024-07-25 01:11:38.653899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.457 [2024-07-25 01:11:38.653935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.457 [2024-07-25 01:11:38.654434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.457 [2024-07-25 01:11:38.654468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.457 [2024-07-25 01:11:38.654505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.457 [2024-07-25 
01:11:38.654543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.457 [2024-07-25 01:11:38.654581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.457 [2024-07-25 01:11:38.654615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.457 [2024-07-25 01:11:38.654655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.457 [2024-07-25 01:11:38.654694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.457 [2024-07-25 01:11:38.654735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.457 [2024-07-25 01:11:38.654775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.457 [2024-07-25 01:11:38.654814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.457 [2024-07-25 01:11:38.654852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.457 [2024-07-25 01:11:38.654895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.457 [2024-07-25 01:11:38.654934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.457 [2024-07-25 01:11:38.654972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.457 [2024-07-25 01:11:38.655013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.457 [2024-07-25 01:11:38.655056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.457 [2024-07-25 01:11:38.655103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.457 [2024-07-25 01:11:38.655145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.457
[... identical "ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" entries repeated, timestamps 2024-07-25 01:11:38.655191 through 01:11:38.670425; log continues ...]
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.670473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.670514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.670559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.670609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.670649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.670683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.670717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.670756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.670797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.670836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.670882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.670922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.670962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.670998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 
[2024-07-25 01:11:38.671041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.671081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.671119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.671163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.671196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.671237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.671279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.671312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.671350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.671391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.671433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.671470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.671509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.671550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.671595] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.671636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.671675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.671714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.671760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.671803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.671847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.671892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.671954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.671996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.672046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.672100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.672144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.672187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.672234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:16.461 [2024-07-25 01:11:38.672276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.672320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.672364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.672414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.672456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.672502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.672550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.672596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.672644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.672689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.672740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.672783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.672827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.672872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.672922] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.672966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.673012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.673496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.673542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.673588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.673634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.673673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.673707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.673750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.673787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.673827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.673866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.673901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.673941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.673980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.674017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.674062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.674100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.674140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.674183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.674219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.674255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.674304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.674355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.461 [2024-07-25 01:11:38.674399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.674445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.674491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.674535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 
01:11:38.674576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.674622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.674668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.674715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.674762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.674807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.674847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.674891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.674937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.674996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.675038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.675085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.675128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.675173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.675217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.675260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.675311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.675356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.675399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.675444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.675492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.675530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.675561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.675604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.675645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.675684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.675724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.675763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.675803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 
[2024-07-25 01:11:38.675850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.675890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.675932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.675969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.676005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.676047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.676079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.676116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.676157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.676733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.676780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.676830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.676875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.676919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.676965] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.677011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.677061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.677107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.677154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.677200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.677243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.677285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.677328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.677375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.677422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.677471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.677518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.677560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.677604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:16.462 [2024-07-25 01:11:38.677655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.677700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.677743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.677788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.677830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.677876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.677924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.677958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.677991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.678033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.678080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.678127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.678168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.678208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.678243] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.678284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.678325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.678365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.678403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.678451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.678486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.678519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.678557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.678596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.678631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.678669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.678708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.678750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.678786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.678827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.678866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.678913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.678952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.678992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.679028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.679072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.462 [2024-07-25 01:11:38.679119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.463 [2024-07-25 01:11:38.679162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.463 [2024-07-25 01:11:38.679207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.463 [2024-07-25 01:11:38.679260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.463 [2024-07-25 01:11:38.679304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.463 [2024-07-25 01:11:38.679348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.463 [2024-07-25 01:11:38.679392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.463 [2024-07-25 
01:11:38.679887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.463 [... identical nvmf_bdev_ctrlr_read_cmd error repeated through 01:11:38.694454; repeats omitted ...] [2024-07-25 
01:11:38.694504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.466 [2024-07-25 01:11:38.694550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.466 [2024-07-25 01:11:38.694594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.466 [2024-07-25 01:11:38.694640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.466 [2024-07-25 01:11:38.694687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.466 [2024-07-25 01:11:38.694727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.466 [2024-07-25 01:11:38.694774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.466 [2024-07-25 01:11:38.694823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.466 [2024-07-25 01:11:38.694867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.466 [2024-07-25 01:11:38.694912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.466 [2024-07-25 01:11:38.694960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.466 [2024-07-25 01:11:38.695004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.466 [2024-07-25 01:11:38.695047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.466 [2024-07-25 01:11:38.695098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.466 [2024-07-25 01:11:38.695145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.466 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:11:16.466 [2024-07-25 01:11:38.695639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.466 [2024-07-25 01:11:38.695682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.466 [2024-07-25 01:11:38.695723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.466 [2024-07-25 01:11:38.695766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.466 [2024-07-25 01:11:38.695796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.466 [2024-07-25 01:11:38.695838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.466 [2024-07-25 01:11:38.695873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.466 [2024-07-25 01:11:38.695909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.466 [2024-07-25 01:11:38.695947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.466 [2024-07-25 01:11:38.695985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.466 [2024-07-25 01:11:38.696029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.466 [2024-07-25 01:11:38.696075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.466 [2024-07-25 01:11:38.696114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.466 [2024-07-25 01:11:38.696152] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.466 [2024-07-25 01:11:38.696188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.466 [2024-07-25 01:11:38.696236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.466 [2024-07-25 01:11:38.696278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.466 [2024-07-25 01:11:38.696309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.466 [2024-07-25 01:11:38.696348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.466 [2024-07-25 01:11:38.696390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.466 [2024-07-25 01:11:38.696436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.466 [2024-07-25 01:11:38.696474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.466 [2024-07-25 01:11:38.696511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.466 [2024-07-25 01:11:38.696550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.466 [2024-07-25 01:11:38.696590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.466 [2024-07-25 01:11:38.696628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.466 [2024-07-25 01:11:38.696669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.466 [2024-07-25 01:11:38.696709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:16.466 [2024-07-25 01:11:38.696749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.466 [2024-07-25 01:11:38.696788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.466 [2024-07-25 01:11:38.696827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.466 [2024-07-25 01:11:38.696870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.466 [2024-07-25 01:11:38.696914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.466 [2024-07-25 01:11:38.696958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.466 [2024-07-25 01:11:38.697004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.466 [2024-07-25 01:11:38.697051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.466 [2024-07-25 01:11:38.697096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.466 [2024-07-25 01:11:38.697141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.466 [2024-07-25 01:11:38.697185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.466 [2024-07-25 01:11:38.697227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.466 [2024-07-25 01:11:38.697271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.466 [2024-07-25 01:11:38.697312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.466 [2024-07-25 01:11:38.697357] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.466 [2024-07-25 01:11:38.697398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.466 [2024-07-25 01:11:38.697448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.466 [2024-07-25 01:11:38.697496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.697539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.697584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.697630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.697678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.697720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.697763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.697811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.697855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.697899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.697937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.697975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.698018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.698061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.698102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.698141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.698184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.698221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.698697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.698739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.698779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.698822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.698862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.698900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.698935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.698975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 
01:11:38.699017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.699071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.699116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.699159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.699209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.699256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.699302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.699350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.699396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.699439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.699487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.699532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.699578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.699623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.699666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.699712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.699754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.699800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.699845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.699888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.699933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.699975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.700017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.700066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.700112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.700156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.700203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.700250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.700293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 
[2024-07-25 01:11:38.700339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.700379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.700418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.700448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.700491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.700531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.700569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.700608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.700651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.700690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.700728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.700772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.700811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.700857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.700905] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.700944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.700975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.701011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.701051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.701091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.701135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.701174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.701213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.701252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.701293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.701336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.701371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.701900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.701948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:16.467 [2024-07-25 01:11:38.701994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.702049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.467 [2024-07-25 01:11:38.702097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.468 [2024-07-25 01:11:38.702142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.468 [2024-07-25 01:11:38.702191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.468 [2024-07-25 01:11:38.702234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.468 [2024-07-25 01:11:38.702280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.468 [2024-07-25 01:11:38.702323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.468 [2024-07-25 01:11:38.702365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.468 [2024-07-25 01:11:38.702410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.468 [2024-07-25 01:11:38.702455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.468 [2024-07-25 01:11:38.702502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.468 [2024-07-25 01:11:38.702546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.468 [2024-07-25 01:11:38.702588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.468 [2024-07-25 01:11:38.702635] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.468 [2024-07-25 01:11:38.702689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.468 [2024-07-25 01:11:38.702733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.468 [2024-07-25 01:11:38.702781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.468 [2024-07-25 01:11:38.702821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.468 [2024-07-25 01:11:38.702869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.468 [2024-07-25 01:11:38.702914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.468 [2024-07-25 01:11:38.702955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.468 [2024-07-25 01:11:38.702997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.468 [2024-07-25 01:11:38.703036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.468 [2024-07-25 01:11:38.703079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.468 [2024-07-25 01:11:38.703125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.468 [2024-07-25 01:11:38.703165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.468 [2024-07-25 01:11:38.703213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.468 [2024-07-25 01:11:38.703256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:16.468 [2024-07-25 01:11:38.703295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.468 [2024-07-25 01:11:38.703325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.468 [2024-07-25 01:11:38.703360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.468 [2024-07-25 01:11:38.703398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.468 [2024-07-25 01:11:38.703450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.468 [2024-07-25 01:11:38.703492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.468 [2024-07-25 01:11:38.703532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.468 [2024-07-25 01:11:38.703570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.468 [2024-07-25 01:11:38.703608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.468 [2024-07-25 01:11:38.703646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.468 [2024-07-25 01:11:38.703686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.468 [2024-07-25 01:11:38.703722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.468 [2024-07-25 01:11:38.703761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.468 [2024-07-25 01:11:38.703797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.468 [2024-07-25 
01:11:38.703841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.468 [identical "Read NLB 1 * block size 512 > SGL length 1" errors repeated through 01:11:38.719025; duplicate log entries elided] 00:11:16.472 [2024-07-25
01:11:38.719073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.472 [2024-07-25 01:11:38.719118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.472 [2024-07-25 01:11:38.719160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.472 [2024-07-25 01:11:38.719208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.472 [2024-07-25 01:11:38.719252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.472 [2024-07-25 01:11:38.719295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.472 [2024-07-25 01:11:38.719340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.472 [2024-07-25 01:11:38.719383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.472 [2024-07-25 01:11:38.719427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.472 [2024-07-25 01:11:38.719474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.472 [2024-07-25 01:11:38.719517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.472 [2024-07-25 01:11:38.719560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.472 [2024-07-25 01:11:38.719602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.472 [2024-07-25 01:11:38.719647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.472 [2024-07-25 01:11:38.719693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.472 [2024-07-25 01:11:38.719739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.472 [2024-07-25 01:11:38.719785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.472 [2024-07-25 01:11:38.719828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.472 [2024-07-25 01:11:38.719877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.472 [2024-07-25 01:11:38.719909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.472 [2024-07-25 01:11:38.719946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.472 [2024-07-25 01:11:38.719983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.472 [2024-07-25 01:11:38.720021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.472 [2024-07-25 01:11:38.720062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.472 [2024-07-25 01:11:38.720105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.472 [2024-07-25 01:11:38.720141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.472 [2024-07-25 01:11:38.720188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.472 [2024-07-25 01:11:38.720226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.472 [2024-07-25 01:11:38.720275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.472 
[2024-07-25 01:11:38.720316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.472 [2024-07-25 01:11:38.720357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.472 [2024-07-25 01:11:38.720394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.472 [2024-07-25 01:11:38.720427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.472 [2024-07-25 01:11:38.720883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.472 [2024-07-25 01:11:38.720918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.472 [2024-07-25 01:11:38.720947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.472 [2024-07-25 01:11:38.720976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.472 [2024-07-25 01:11:38.721005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.472 [2024-07-25 01:11:38.721034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.472 [2024-07-25 01:11:38.721068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.472 [2024-07-25 01:11:38.721097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.472 [2024-07-25 01:11:38.721126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.472 [2024-07-25 01:11:38.721154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.472 [2024-07-25 01:11:38.721184] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.472 [2024-07-25 01:11:38.721214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.472 [2024-07-25 01:11:38.721243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.472 [2024-07-25 01:11:38.721274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.472 [2024-07-25 01:11:38.721304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.472 [2024-07-25 01:11:38.721333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.472 [2024-07-25 01:11:38.721363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.472 [2024-07-25 01:11:38.721392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.472 [2024-07-25 01:11:38.721420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.472 [2024-07-25 01:11:38.721449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.472 [2024-07-25 01:11:38.721477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.472 [2024-07-25 01:11:38.721506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.472 [2024-07-25 01:11:38.721534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.472 [2024-07-25 01:11:38.721563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.472 [2024-07-25 01:11:38.721593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:16.472 [2024-07-25 01:11:38.721621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.472 [2024-07-25 01:11:38.721665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.472 [2024-07-25 01:11:38.721714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.472 [2024-07-25 01:11:38.721757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.472 [2024-07-25 01:11:38.721797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.472 [2024-07-25 01:11:38.721835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.472 [2024-07-25 01:11:38.721875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.472 [2024-07-25 01:11:38.721927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.472 [2024-07-25 01:11:38.721969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.472 [2024-07-25 01:11:38.722018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.472 [2024-07-25 01:11:38.722062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.472 [2024-07-25 01:11:38.722109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.472 [2024-07-25 01:11:38.722152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.472 [2024-07-25 01:11:38.722193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.472 [2024-07-25 01:11:38.722239] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.472 [2024-07-25 01:11:38.722284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.472 [2024-07-25 01:11:38.722324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.472 [2024-07-25 01:11:38.722371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.472 [2024-07-25 01:11:38.722417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.722464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.722507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.722550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.722593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.722636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.722680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.722721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.722768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.722811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.722853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.722894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.722939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.722983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.723028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.723077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.723122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.723165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.723212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.723255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.723765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.723813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.723852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.723890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.723927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 
01:11:38.723971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.724017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.724057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.724089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.724127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.724166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.724203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.724240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.724274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.724311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.724350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.724394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.724435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.724477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.724516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.724550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.724585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.724618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.724659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.724698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.724741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.724779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.724818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.724863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.724903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.724947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.724991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.725034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.725086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 
[2024-07-25 01:11:38.725134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.725180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.725230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.725273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.725318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.725363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.725406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.725449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.725490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.725534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.725576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.725622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.725662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.725705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.725746] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.725782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.725821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.725855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.725892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.725934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.725970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.726006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.726049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.726091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.726134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.726171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.726209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.726243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.726285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:11:16.473 [2024-07-25 01:11:38.726326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.726753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.726787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.726816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.726848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.726876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.726904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.726932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.473 [2024-07-25 01:11:38.726960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.474 [2024-07-25 01:11:38.726988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.474 [2024-07-25 01:11:38.727016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.474 [2024-07-25 01:11:38.727049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.474 [2024-07-25 01:11:38.727077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.474 [2024-07-25 01:11:38.727105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.474 [2024-07-25 01:11:38.727133] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.474 [2024-07-25 01:11:38.727162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.474 [2024-07-25 01:11:38.727191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.474 [2024-07-25 01:11:38.727219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.474 [2024-07-25 01:11:38.727248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.474 [2024-07-25 01:11:38.727277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.474 [2024-07-25 01:11:38.727305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.474 [2024-07-25 01:11:38.727333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.474 [2024-07-25 01:11:38.727361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.474 [2024-07-25 01:11:38.727390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.474 [2024-07-25 01:11:38.727418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.474 [2024-07-25 01:11:38.727448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.474 [2024-07-25 01:11:38.727486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.474 [2024-07-25 01:11:38.727525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.474 [2024-07-25 01:11:38.727565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:11:16.474 [2024-07-25 01:11:38.727594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:16.474 true 00:11:16.476 01:11:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 789518 00:11:16.476 01:11:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:17.414 01:11:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:17.674 01:11:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:11:17.674 01:11:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:11:17.674 true 00:11:17.674 01:11:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 789518 00:11:17.674 01:11:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:17.934 01:11:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns
nqn.2016-06.io.spdk:cnode1 Delay0 00:11:18.193 01:11:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:11:18.193 01:11:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:11:18.193 true 00:11:18.452 01:11:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 789518 00:11:18.452 01:11:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:19.400 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:19.400 01:11:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:19.400 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:19.400 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:19.400 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:19.662 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:19.662 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:19.662 01:11:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:11:19.662 01:11:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:11:19.921 true 00:11:19.921 01:11:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 789518 00:11:19.921 01:11:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:11:20.859 01:11:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:20.859 01:11:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:11:20.859 01:11:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:11:20.859 true 00:11:21.118 01:11:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 789518 00:11:21.118 01:11:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:21.118 01:11:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:21.408 01:11:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:11:21.408 01:11:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:11:21.408 true 00:11:21.667 01:11:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 789518 00:11:21.667 01:11:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:22.607 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:22.607 01:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:11:22.607 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:22.607 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:22.867 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:22.867 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:22.867 01:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:11:22.867 01:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:11:23.126 true 00:11:23.126 01:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 789518 00:11:23.126 01:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:24.065 01:11:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:24.065 01:11:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:11:24.065 01:11:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:11:24.325 true 00:11:24.325 01:11:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 789518 00:11:24.325 01:11:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:24.325 01:11:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:24.585 01:11:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:11:24.585 01:11:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:11:24.844 true 00:11:24.845 01:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 789518 00:11:24.845 01:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:26.225 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:26.225 01:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:26.225 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:26.225 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:26.225 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:26.225 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:26.225 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:26.225 01:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:11:26.225 01:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:11:26.225 true 00:11:26.484 01:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 789518 00:11:26.484 01:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:27.053 01:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:27.313 01:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:11:27.313 01:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:11:27.573 true 00:11:27.573 01:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 789518 00:11:27.573 01:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:27.833 01:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:27.833 Initializing NVMe Controllers 00:11:27.833 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:27.833 Controller IO queue size 128, less than required. 00:11:27.833 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:27.833 Controller IO queue size 128, less than required. 00:11:27.833 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:27.833 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:27.833 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:11:27.833 Initialization complete. Launching workers. 
00:11:27.834 ======================================================== 00:11:27.834 Latency(us) 00:11:27.834 Device Information : IOPS MiB/s Average min max 00:11:27.834 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3261.73 1.59 27866.00 1188.85 1086604.11 00:11:27.834 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 17459.89 8.53 7332.30 1955.59 382393.30 00:11:27.834 ======================================================== 00:11:27.834 Total : 20721.62 10.12 10564.46 1188.85 1086604.11 00:11:27.834 00:11:27.834 01:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:11:27.834 01:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:11:28.094 true 00:11:28.094 01:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 789518 00:11:28.094 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (789518) - No such process 00:11:28.094 01:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 789518 00:11:28.094 01:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:28.354 01:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:28.354 01:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:11:28.354 01:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:11:28.354 01:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:11:28.354 01:11:50 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:28.354 01:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:11:28.613 null0 00:11:28.613 01:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:28.613 01:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:28.613 01:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:11:28.872 null1 00:11:28.872 01:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:28.872 01:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:28.872 01:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:11:28.872 null2 00:11:29.132 01:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:29.132 01:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:29.132 01:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:11:29.132 null3 00:11:29.132 01:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:29.132 01:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:29.132 01:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:11:29.392 null4 
00:11:29.392 01:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:29.392 01:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:29.392 01:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:11:29.652 null5 00:11:29.652 01:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:29.652 01:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:29.652 01:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:11:29.652 null6 00:11:29.652 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:29.652 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:29.652 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:11:29.914 null7 00:11:29.914 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:29.914 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:29.914 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:11:29.914 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:29.914 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:11:29.914 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:29.914 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:11:29.914 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:29.914 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:11:29.914 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:29.914 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:29.914 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:29.914 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:29.914 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:11:29.914 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:29.914 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:11:29.914 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:29.914 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:29.914 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:29.914 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:29.914 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:11:29.914 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:11:29.914 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:29.914 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:29.914 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:11:29.914 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:29.914 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:29.914 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:29.914 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:11:29.914 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:29.914 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:11:29.914 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:29.914 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:29.914 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:29.914 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:29.914 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:29.914 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:11:29.914 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:29.914 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:11:29.914 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:29.914 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:11:29.914 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:29.914 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:29.914 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:29.914 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:29.914 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:29.914 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:29.914 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:11:29.915 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:11:29.915 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:11:29.915 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:29.915 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:29.915 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:29.915 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:29.915 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:11:29.915 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:29.915 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:11:29.915 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:29.915 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:29.915 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:29.915 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:29.915 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:29.915 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 795107 795108 795110 795112 795116 795118 795120 795121 00:11:29.915 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:11:29.915 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:29.915 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 
bdev=null7 00:11:29.915 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:29.915 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:29.915 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:30.233 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:30.233 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:30.233 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:30.233 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:30.233 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:30.233 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:30.233 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:30.233 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:30.233 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:30.233 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:30.233 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:30.233 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:30.233 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:30.233 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:30.233 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:30.233 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:30.233 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:30.233 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:30.233 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:30.233 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:30.233 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:30.233 01:11:52 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:30.233 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:30.233 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:30.233 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:30.233 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:30.233 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:30.233 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:30.233 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:30.233 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:30.233 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:30.233 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:30.493 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:30.493 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 6 00:11:30.493 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:30.493 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:30.493 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:30.493 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:30.493 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:30.493 01:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:30.751 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:30.751 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:30.751 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:30.751 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:30.751 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:30.751 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:30.751 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:30.751 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:30.751 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:30.751 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:30.751 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:30.751 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:30.751 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:30.751 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:30.751 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:30.751 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:30.751 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:30.751 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:30.751 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:30.751 01:11:53 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:30.751 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:30.751 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:30.751 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:30.751 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:30.751 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:31.011 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:31.011 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:31.011 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:31.011 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:31.011 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 
7 00:11:31.011 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:31.011 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:31.011 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:31.011 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:31.011 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:31.011 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:31.011 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:31.011 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:31.011 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:31.011 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:31.011 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:31.011 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:31.011 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:31.011 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:31.011 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:31.011 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:31.011 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:31.011 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:31.011 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:31.011 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:31.011 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:31.011 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:31.011 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:31.011 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:31.012 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:31.012 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:31.271 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:31.271 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:31.271 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:31.271 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:31.271 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:31.271 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:31.271 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:31.271 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:31.531 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:31.531 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:31.531 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:11:31.531 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:31.531 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:31.531 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:31.531 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:31.531 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:31.531 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:31.531 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:31.531 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:31.531 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:31.531 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:31.531 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:31.531 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:31.531 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:31.531 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:31.531 01:11:53 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:31.531 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:31.531 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:31.531 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:31.531 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:31.531 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:31.531 01:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:31.531 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:31.531 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:31.791 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:31.791 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:31.791 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:31.791 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:31.791 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:31.791 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:31.791 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:31.791 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:31.791 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:31.791 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:31.791 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:31.791 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:31.791 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:31.791 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:31.791 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:31.791 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:31.791 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:31.791 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:31.791 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:31.791 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:31.791 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:31.791 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:31.791 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:31.791 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:31.791 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:31.791 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:31.792 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:31.792 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:31.792 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:31.792 01:11:54 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:32.051 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:32.051 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:32.051 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:32.051 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:32.051 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:32.051 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:32.051 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:32.051 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:32.311 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:11:32.311 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:32.311 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:32.311 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:32.311 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:32.311 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:32.311 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:32.311 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:32.311 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:32.311 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:32.311 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:32.311 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:32.311 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:32.311 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:32.311 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 
nqn.2016-06.io.spdk:cnode1 null7 00:11:32.311 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:32.311 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:32.311 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:32.311 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:32.311 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:32.311 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:32.311 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:32.311 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:32.311 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:32.311 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:32.311 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:32.311 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:32.311 01:11:54 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:32.311 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:32.571 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:32.571 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:32.571 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:32.572 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:32.572 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:32.572 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:32.572 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:32.572 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:32.572 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:32.572 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:11:32.572 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:32.572 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:32.572 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:32.572 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:32.572 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:32.572 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:32.572 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:32.572 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:32.572 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:32.572 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:32.572 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:32.572 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:32.572 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:32.572 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:32.572 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:32.572 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:32.572 01:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:32.831 01:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:32.831 01:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:32.831 01:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:32.831 01:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:32.831 01:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:32.831 01:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:32.831 01:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:32.831 01:11:55 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:33.091 01:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:33.091 01:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:33.091 01:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:33.091 01:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:33.091 01:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:33.091 01:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:33.091 01:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:33.091 01:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:33.091 01:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:33.091 01:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:33.091 01:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:33.091 01:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:33.091 01:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 
null1 00:11:33.091 01:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:33.091 01:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:33.091 01:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:33.091 01:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:33.091 01:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:33.091 01:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:33.091 01:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:33.091 01:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:33.091 01:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:33.091 01:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:33.091 01:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:33.091 01:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:33.091 01:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:33.091 01:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:33.091 01:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:33.091 01:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:33.091 01:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:33.091 01:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:33.091 01:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:33.351 01:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:33.351 01:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:33.351 01:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:33.351 01:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:33.351 01:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:33.351 01:11:55 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:33.351 01:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:33.351 01:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:33.351 01:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:33.351 01:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:33.351 01:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:33.351 01:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:33.351 01:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:33.351 01:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:33.351 01:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:33.351 01:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:33.351 01:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:33.351 01:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:33.351 01:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( 
++i )) 00:11:33.351 01:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:33.351 01:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:33.351 01:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:33.351 01:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:33.351 01:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:33.611 01:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:33.611 01:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:33.611 01:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:33.611 01:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:33.611 01:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:33.611 01:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:33.611 01:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:33.611 01:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:33.611 01:11:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:33.611 01:11:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:33.611 01:11:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:33.611 01:11:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:33.871 01:11:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:33.871 01:11:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:33.871 01:11:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:33.871 01:11:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:33.871 01:11:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:33.871 01:11:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:33.871 01:11:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:33.871 01:11:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:33.871 01:11:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:33.871 01:11:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:33.871 01:11:56 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:33.871 01:11:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:33.871 01:11:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:11:33.871 01:11:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:11:33.871 01:11:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:33.871 01:11:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:11:33.871 01:11:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:33.871 01:11:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:11:33.871 01:11:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:33.871 01:11:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:33.871 rmmod nvme_tcp 00:11:33.871 rmmod nvme_fabrics 00:11:33.871 rmmod nvme_keyring 00:11:33.871 01:11:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:33.871 01:11:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:11:33.871 01:11:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:11:33.871 01:11:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 789104 ']' 00:11:33.871 01:11:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 789104 00:11:33.871 01:11:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 789104 ']' 00:11:33.871 01:11:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 789104 00:11:33.871 01:11:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:11:33.871 01:11:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:33.871 01:11:56 
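The add/remove churn traced above boils down to a loop of roughly this shape, reconstructed from the `ns_hotplug_stress.sh@16`–`@18` markers. This is a dry-run sketch, not the actual script: `rpc` here is a stand-in for `scripts/rpc.py` that only echoes its arguments, and the backgrounding of the RPC calls is an assumption inferred from the out-of-order interleaving of the `add_ns`/`remove_ns` lines in the log.

```shell
# Dry-run sketch of the hotplug stress loop (assumption: reconstructed from
# the @16-@18 script markers in the trace, not the real script contents).
# "rpc" stands in for scripts/rpc.py and only echoes, so this runs without
# an SPDK target.
rpc() { echo "rpc.py $*"; }

nqn=nqn.2016-06.io.spdk:cnode1
i=0
while [ "$i" -lt 3 ]; do            # the trace shows (( i < 10 )); 3 for brevity
    # @17: attach namespaces 1..8 (backed by bdevs null0..null7); launching
    # them in the background would explain the shuffled order in the log
    for n in 1 2 3 4 5 6 7 8; do
        rpc nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))" &
    done
    wait
    # @18: detach the same namespaces again
    for n in 1 2 3 4 5 6 7 8; do
        rpc nvmf_subsystem_remove_ns "$nqn" "$n" &
    done
    wait
    i=$((i + 1))                    # @16: (( ++i ))
done
echo "completed $i add/remove cycles"
```

Each cycle exercises namespace attach/detach on `cnode1` while initiators are connected, which is the hotplug condition the test stresses.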
nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 789104 00:11:33.871 01:11:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:33.871 01:11:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:33.871 01:11:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 789104' 00:11:33.872 killing process with pid 789104 00:11:33.872 01:11:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 789104 00:11:33.872 01:11:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 789104 00:11:34.132 01:11:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:34.132 01:11:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:34.132 01:11:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:34.132 01:11:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:34.132 01:11:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:34.132 01:11:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:34.132 01:11:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:34.132 01:11:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:36.042 01:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:36.042 00:11:36.042 real 0m45.905s 00:11:36.042 user 3m10.260s 00:11:36.042 sys 0m15.206s 00:11:36.042 01:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:36.042 01:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set 
+x 00:11:36.042 ************************************ 00:11:36.042 END TEST nvmf_ns_hotplug_stress 00:11:36.042 ************************************ 00:11:36.042 01:11:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:36.042 01:11:58 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:36.042 01:11:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:36.042 01:11:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:36.042 01:11:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:36.302 ************************************ 00:11:36.302 START TEST nvmf_connect_stress 00:11:36.302 ************************************ 00:11:36.302 01:11:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:36.302 * Looking for test storage... 
00:11:36.302 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:36.302 01:11:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:36.302 01:11:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:11:36.302 01:11:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:36.302 01:11:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:36.302 01:11:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:36.302 01:11:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:36.302 01:11:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:36.302 01:11:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:36.302 01:11:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:36.302 01:11:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:36.302 01:11:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:36.302 01:11:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:36.303 01:11:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:36.303 01:11:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:36.303 01:11:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:36.303 01:11:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:36.303 01:11:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:36.303 01:11:58 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:36.303 01:11:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:36.303 01:11:58 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:36.303 01:11:58 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:36.303 01:11:58 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:36.303 01:11:58 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.303 01:11:58 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.303 01:11:58 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.303 01:11:58 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:11:36.303 01:11:58 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.303 01:11:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:11:36.303 01:11:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:36.303 01:11:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:36.303 01:11:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:36.303 01:11:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:36.303 01:11:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:36.303 01:11:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:36.303 01:11:58 nvmf_tcp.nvmf_connect_stress -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:36.303 01:11:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:36.303 01:11:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:11:36.303 01:11:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:36.303 01:11:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:36.303 01:11:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:36.303 01:11:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:36.303 01:11:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:36.303 01:11:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:36.303 01:11:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:36.303 01:11:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:36.303 01:11:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:36.303 01:11:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:36.303 01:11:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:11:36.303 01:11:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:41.588 01:12:03 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:41.588 
01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:41.588 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:41.588 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:41.588 
01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:41.588 Found net devices under 0000:86:00.0: cvl_0_0 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:41.588 01:12:03 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:41.588 Found net devices under 0000:86:00.1: cvl_0_1 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 
00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:41.588 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:41.589 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:41.589 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:41.589 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:41.589 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:41.589 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:41.589 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:41.589 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:41.589 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:41.589 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:41.589 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:41.589 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:41.589 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:11:41.589 00:11:41.589 --- 10.0.0.2 ping statistics --- 00:11:41.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:41.589 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:11:41.589 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:41.589 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:41.589 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.366 ms 00:11:41.589 00:11:41.589 --- 10.0.0.1 ping statistics --- 00:11:41.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:41.589 rtt min/avg/max/mdev = 0.366/0.366/0.366/0.000 ms 00:11:41.589 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:41.589 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:11:41.589 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:41.589 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:41.589 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:41.589 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:41.589 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:41.589 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:41.589 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:41.589 01:12:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:11:41.589 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:41.589 01:12:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:41.589 01:12:03 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:41.589 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=799378 00:11:41.589 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 799378 00:11:41.589 01:12:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 799378 ']' 00:11:41.589 01:12:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:41.589 01:12:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:41.589 01:12:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:41.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:41.589 01:12:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:41.589 01:12:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:41.589 01:12:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:41.589 [2024-07-25 01:12:03.762819] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:11:41.589 [2024-07-25 01:12:03.762862] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:41.589 EAL: No free 2048 kB hugepages reported on node 1 00:11:41.589 [2024-07-25 01:12:03.820010] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:41.589 [2024-07-25 01:12:03.900285] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
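The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message comes from the harness's `waitforlisten` helper, which blocks until the freshly started `nvmf_tgt` exposes its RPC socket. A hedged sketch of that idea (the function name, retry count, and polling interval here are illustrative assumptions, not the SPDK implementation):

```shell
# Poll until a path (e.g. an RPC socket) exists, giving up after
# max_retries attempts. Interval and defaults are illustrative.
waitforfile() {
    local path=$1 max_retries=${2:-100}
    local i
    for ((i = 0; i < max_retries; i++)); do
        [ -e "$path" ] && return 0
        sleep 0.1
    done
    return 1
}

# Simulate a daemon that creates its socket shortly after starting:
tmp=$(mktemp -u)
( sleep 0.3; : > "$tmp" ) &
waitforfile "$tmp" 50 && echo "listening: $tmp"
```

The real helper additionally verifies the target process is still alive while waiting, so a crashed `nvmf_tgt` fails fast instead of timing out.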
00:11:41.589 [2024-07-25 01:12:03.900321] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:41.589 [2024-07-25 01:12:03.900328] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:41.589 [2024-07-25 01:12:03.900334] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:41.589 [2024-07-25 01:12:03.900339] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:41.589 [2024-07-25 01:12:03.900375] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:41.589 [2024-07-25 01:12:03.900462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:41.589 [2024-07-25 01:12:03.900463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:42.159 01:12:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:42.159 01:12:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:11:42.159 01:12:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:42.159 01:12:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:42.159 01:12:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:42.159 01:12:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:42.159 01:12:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:42.159 01:12:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.159 01:12:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:42.159 [2024-07-25 01:12:04.611702] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:42.159 01:12:04 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.159 01:12:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:42.159 01:12:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.159 01:12:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:42.159 01:12:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.159 01:12:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:42.159 01:12:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.159 01:12:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:42.159 [2024-07-25 01:12:04.652368] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:42.420 01:12:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.420 01:12:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:42.420 01:12:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.420 01:12:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:42.420 NULL1 00:11:42.420 01:12:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.420 01:12:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=799628 00:11:42.420 01:12:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:42.420 01:12:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:11:42.420 01:12:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:42.420 01:12:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:11:42.420 01:12:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:42.420 01:12:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:42.420 01:12:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:42.420 01:12:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:42.420 01:12:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:42.420 01:12:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:42.420 01:12:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:42.420 01:12:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:42.420 01:12:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:42.420 01:12:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:42.420 01:12:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:42.420 01:12:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:42.420 EAL: No free 2048 kB hugepages reported on node 1 00:11:42.420 01:12:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:42.420 01:12:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:42.420 01:12:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- 
# for i in $(seq 1 20) 00:11:42.420 01:12:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:42.420 01:12:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:42.420 01:12:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:42.420 01:12:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:42.420 01:12:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:42.420 01:12:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:42.420 01:12:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:42.420 01:12:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:42.420 01:12:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:42.420 01:12:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:42.420 01:12:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:42.420 01:12:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:42.420 01:12:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:42.420 01:12:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:42.420 01:12:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:42.420 01:12:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:42.420 01:12:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:42.420 01:12:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:42.420 01:12:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:42.420 01:12:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i 
in $(seq 1 20) 00:11:42.420 01:12:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:42.420 01:12:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:42.420 01:12:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:42.420 01:12:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:42.420 01:12:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:42.420 01:12:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 799628 00:11:42.420 01:12:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:42.420 01:12:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.420 01:12:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:42.680 01:12:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.680 01:12:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 799628 00:11:42.680 01:12:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:42.680 01:12:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.680 01:12:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:42.940 01:12:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.940 01:12:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 799628 00:11:42.940 01:12:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:42.940 01:12:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.940 01:12:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:43.510 01:12:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:11:43.510 01:12:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 799628 00:11:43.510 01:12:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:43.510 01:12:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.510 01:12:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:43.770 01:12:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.770 01:12:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 799628 00:11:43.770 01:12:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:43.770 01:12:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.770 01:12:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:44.030 01:12:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.030 01:12:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 799628 00:11:44.030 01:12:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:44.030 01:12:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.030 01:12:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:44.290 01:12:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.290 01:12:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 799628 00:11:44.290 01:12:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:44.290 01:12:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.290 01:12:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:44.550 01:12:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:11:44.550 01:12:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 799628 00:11:44.550 01:12:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:44.550 01:12:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.550 01:12:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:45.120 01:12:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.120 01:12:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 799628 00:11:45.120 01:12:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:45.120 01:12:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.120 01:12:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:45.380 01:12:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.380 01:12:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 799628 00:11:45.380 01:12:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:45.380 01:12:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.380 01:12:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:45.640 01:12:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.640 01:12:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 799628 00:11:45.640 01:12:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:45.640 01:12:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.640 01:12:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:45.900 01:12:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:11:45.900 01:12:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 799628 00:11:45.900 01:12:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:45.900 01:12:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.900 01:12:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:46.470 01:12:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.470 01:12:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 799628 00:11:46.470 01:12:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:46.470 01:12:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.470 01:12:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:46.730 01:12:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.730 01:12:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 799628 00:11:46.730 01:12:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:46.730 01:12:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.730 01:12:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:46.991 01:12:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.991 01:12:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 799628 00:11:46.991 01:12:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:46.991 01:12:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.991 01:12:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:47.251 01:12:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:11:47.251 01:12:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 799628 00:11:47.251 01:12:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:47.251 01:12:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.251 01:12:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:47.511 01:12:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.511 01:12:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 799628 00:11:47.511 01:12:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:47.511 01:12:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.511 01:12:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:47.821 01:12:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.821 01:12:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 799628 00:11:47.821 01:12:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:47.821 01:12:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.821 01:12:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:48.391 01:12:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.391 01:12:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 799628 00:11:48.391 01:12:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:48.391 01:12:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.391 01:12:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:48.651 01:12:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:11:48.651 01:12:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 799628 00:11:48.651 01:12:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:48.651 01:12:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.651 01:12:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:48.911 01:12:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.911 01:12:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 799628 00:11:48.911 01:12:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:48.911 01:12:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.911 01:12:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:49.170 01:12:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.170 01:12:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 799628 00:11:49.170 01:12:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:49.170 01:12:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.170 01:12:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:49.429 01:12:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.429 01:12:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 799628 00:11:49.429 01:12:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:49.429 01:12:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.429 01:12:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:49.999 01:12:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:11:49.999 01:12:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 799628 00:11:49.999 01:12:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:49.999 01:12:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.999 01:12:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:50.259 01:12:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:50.259 01:12:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 799628 00:11:50.259 01:12:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:50.259 01:12:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:50.259 01:12:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:50.519 01:12:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:50.519 01:12:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 799628 00:11:50.519 01:12:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:50.520 01:12:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:50.520 01:12:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:50.784 01:12:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:50.784 01:12:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 799628 00:11:50.784 01:12:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:50.784 01:12:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:50.784 01:12:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:51.090 01:12:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:11:51.090 01:12:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 799628 00:11:51.090 01:12:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:51.090 01:12:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.090 01:12:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:51.658 01:12:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.658 01:12:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 799628 00:11:51.658 01:12:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:51.658 01:12:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.658 01:12:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:51.918 01:12:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.918 01:12:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 799628 00:11:51.918 01:12:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:51.918 01:12:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.918 01:12:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:52.179 01:12:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:52.179 01:12:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 799628 00:11:52.179 01:12:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:52.179 01:12:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:52.179 01:12:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:52.438 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:52.438 
01:12:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:52.438 01:12:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 799628 00:11:52.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (799628) - No such process 00:11:52.438 01:12:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 799628 00:11:52.438 01:12:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:52.438 01:12:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:52.438 01:12:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:11:52.439 01:12:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:52.439 01:12:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:11:52.439 01:12:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:52.439 01:12:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:11:52.439 01:12:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:52.439 01:12:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:52.439 rmmod nvme_tcp 00:11:52.439 rmmod nvme_fabrics 00:11:52.439 rmmod nvme_keyring 00:11:52.439 01:12:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:52.439 01:12:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:11:52.439 01:12:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:11:52.439 01:12:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 799378 ']' 00:11:52.439 01:12:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 799378 00:11:52.439 01:12:14 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@948 -- # '[' -z 799378 ']' 00:11:52.439 01:12:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 799378 00:11:52.439 01:12:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:11:52.439 01:12:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:52.439 01:12:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 799378 00:11:52.699 01:12:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:52.699 01:12:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:52.699 01:12:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 799378' 00:11:52.699 killing process with pid 799378 00:11:52.699 01:12:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 799378 00:11:52.699 01:12:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 799378 00:11:52.699 01:12:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:52.699 01:12:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:52.699 01:12:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:52.699 01:12:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:52.699 01:12:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:52.699 01:12:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:52.699 01:12:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:52.699 01:12:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:55.258 01:12:17 nvmf_tcp.nvmf_connect_stress -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:55.258 00:11:55.258 real 0m18.655s 00:11:55.258 user 0m40.723s 00:11:55.258 sys 0m7.910s 00:11:55.258 01:12:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:55.258 01:12:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:55.258 ************************************ 00:11:55.258 END TEST nvmf_connect_stress 00:11:55.258 ************************************ 00:11:55.258 01:12:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:55.258 01:12:17 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:55.258 01:12:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:55.258 01:12:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:55.258 01:12:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:55.258 ************************************ 00:11:55.258 START TEST nvmf_fused_ordering 00:11:55.258 ************************************ 00:11:55.258 01:12:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:55.258 * Looking for test storage... 
00:11:55.258 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:55.258 01:12:17 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:55.258 01:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:11:55.258 01:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:55.259 01:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:55.259 01:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:55.259 01:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:55.259 01:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:55.259 01:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:55.259 01:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:55.259 01:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:55.259 01:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:55.259 01:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:55.259 01:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:55.259 01:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:55.259 01:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:55.259 01:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:55.259 01:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:55.259 01:12:17 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:55.259 01:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:55.259 01:12:17 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:55.259 01:12:17 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:55.259 01:12:17 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:55.259 01:12:17 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.259 01:12:17 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.259 01:12:17 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.259 01:12:17 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:11:55.259 01:12:17 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.259 01:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:11:55.259 01:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:55.259 01:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:55.259 01:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:55.259 01:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:55.259 01:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:55.259 01:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:55.259 01:12:17 nvmf_tcp.nvmf_fused_ordering -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:55.259 01:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:55.259 01:12:17 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:11:55.259 01:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:55.259 01:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:55.259 01:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:55.259 01:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:55.259 01:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:55.259 01:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:55.259 01:12:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:55.259 01:12:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:55.259 01:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:55.259 01:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:55.259 01:12:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:11:55.259 01:12:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:00.541 01:12:22 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:00.541 
01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:00.541 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:00.541 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:00.541 
01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:00.541 Found net devices under 0000:86:00.0: cvl_0_0 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:00.541 01:12:22 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:00.541 Found net devices under 0000:86:00.1: cvl_0_1 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 
00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:00.541 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:00.542 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:00.542 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:00.542 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:00.542 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:00.542 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:00.542 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:00.542 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:00.542 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.275 ms
00:12:00.542
00:12:00.542 --- 10.0.0.2 ping statistics ---
00:12:00.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:00.542 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms
00:12:00.542 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:12:00.542 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:12:00.542 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms
00:12:00.542
00:12:00.542 --- 10.0.0.1 ping statistics ---
00:12:00.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:00.542 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms
00:12:00.542 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:12:00.542 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0
00:12:00.542 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:12:00.542 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:12:00.542 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:12:00.542 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:12:00.542 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:12:00.542 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:12:00.542 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:12:00.542 01:12:22 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2
00:12:00.542 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:12:00.542 01:12:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable
00:12:00.542 01:12:22
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:12:00.542 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=805165
00:12:00.542 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 805165
00:12:00.542 01:12:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:12:00.542 01:12:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 805165 ']'
00:12:00.542 01:12:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:00.542 01:12:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100
00:12:00.542 01:12:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:00.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:00.542 01:12:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable
00:12:00.542 01:12:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:12:00.542 [2024-07-25 01:12:22.561451] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization...
00:12:00.542 [2024-07-25 01:12:22.561493] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:00.542 EAL: No free 2048 kB hugepages reported on node 1
00:12:00.542 [2024-07-25 01:12:22.619650] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:00.542 [2024-07-25 01:12:22.696855] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:12:00.542 [2024-07-25 01:12:22.696893] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:12:00.542 [2024-07-25 01:12:22.696900] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:12:00.542 [2024-07-25 01:12:22.696906] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:12:00.542 [2024-07-25 01:12:22.696911] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:12:00.542 [2024-07-25 01:12:22.696935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:12:01.111 01:12:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:12:01.111 01:12:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0
00:12:01.111 01:12:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:12:01.111 01:12:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable
00:12:01.111 01:12:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:12:01.111 01:12:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:12:01.111 01:12:23 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:12:01.111 01:12:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:01.111 01:12:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:12:01.111 [2024-07-25 01:12:23.399694] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:12:01.111 01:12:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:01.111 01:12:23 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:12:01.111 01:12:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:01.111 01:12:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:12:01.111 01:12:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:01.111 01:12:23 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:01.111 01:12:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:01.111 01:12:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:12:01.111 [2024-07-25 01:12:23.415859] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:01.111 01:12:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:01.111 01:12:23 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:12:01.111 01:12:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:01.111 01:12:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:12:01.111 NULL1
00:12:01.112 01:12:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:01.112 01:12:23 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine
00:12:01.112 01:12:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:01.112 01:12:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:12:01.112 01:12:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:01.112 01:12:23 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:12:01.112 01:12:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:01.112 01:12:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:12:01.112 01:12:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:01.112 01:12:23 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:12:01.112 [2024-07-25 01:12:23.470439] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization...
00:12:01.112 [2024-07-25 01:12:23.470483] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid805198 ]
00:12:01.112 EAL: No free 2048 kB hugepages reported on node 1
00:12:02.053 Attached to nqn.2016-06.io.spdk:cnode1
00:12:02.053 Namespace ID: 1 size: 1GB
00:12:02.053 fused_ordering(0)
00:12:02.053 … fused_ordering(1) through fused_ordering(204), one counter line each, elided …
00:12:02.994 fused_ordering(205)
00:12:02.994 … fused_ordering(206) through fused_ordering(410) elided …
00:12:03.936 fused_ordering(411)
00:12:03.936 … fused_ordering(412) through fused_ordering(615) elided …
00:12:04.877 fused_ordering(616)
00:12:04.877 … fused_ordering(617) through fused_ordering(820) elided …
00:12:05.818 fused_ordering(821)
00:12:05.818 … fused_ordering(822) through fused_ordering(875) elided …
00:12:05.818 fused_ordering(876) 00:12:05.818 fused_ordering(877) 00:12:05.818 fused_ordering(878) 00:12:05.818 fused_ordering(879) 00:12:05.818 fused_ordering(880) 00:12:05.818 fused_ordering(881) 00:12:05.818 fused_ordering(882) 00:12:05.818 fused_ordering(883) 00:12:05.818 fused_ordering(884) 00:12:05.818 fused_ordering(885) 00:12:05.818 fused_ordering(886) 00:12:05.818 fused_ordering(887) 00:12:05.818 fused_ordering(888) 00:12:05.818 fused_ordering(889) 00:12:05.818 fused_ordering(890) 00:12:05.818 fused_ordering(891) 00:12:05.818 fused_ordering(892) 00:12:05.818 fused_ordering(893) 00:12:05.818 fused_ordering(894) 00:12:05.818 fused_ordering(895) 00:12:05.818 fused_ordering(896) 00:12:05.818 fused_ordering(897) 00:12:05.818 fused_ordering(898) 00:12:05.818 fused_ordering(899) 00:12:05.818 fused_ordering(900) 00:12:05.818 fused_ordering(901) 00:12:05.818 fused_ordering(902) 00:12:05.818 fused_ordering(903) 00:12:05.818 fused_ordering(904) 00:12:05.818 fused_ordering(905) 00:12:05.818 fused_ordering(906) 00:12:05.818 fused_ordering(907) 00:12:05.818 fused_ordering(908) 00:12:05.818 fused_ordering(909) 00:12:05.818 fused_ordering(910) 00:12:05.818 fused_ordering(911) 00:12:05.818 fused_ordering(912) 00:12:05.818 fused_ordering(913) 00:12:05.818 fused_ordering(914) 00:12:05.818 fused_ordering(915) 00:12:05.818 fused_ordering(916) 00:12:05.818 fused_ordering(917) 00:12:05.818 fused_ordering(918) 00:12:05.818 fused_ordering(919) 00:12:05.818 fused_ordering(920) 00:12:05.818 fused_ordering(921) 00:12:05.818 fused_ordering(922) 00:12:05.818 fused_ordering(923) 00:12:05.818 fused_ordering(924) 00:12:05.818 fused_ordering(925) 00:12:05.818 fused_ordering(926) 00:12:05.818 fused_ordering(927) 00:12:05.818 fused_ordering(928) 00:12:05.818 fused_ordering(929) 00:12:05.818 fused_ordering(930) 00:12:05.818 fused_ordering(931) 00:12:05.818 fused_ordering(932) 00:12:05.818 fused_ordering(933) 00:12:05.818 fused_ordering(934) 00:12:05.818 fused_ordering(935) 00:12:05.818 
fused_ordering(936) 00:12:05.818 fused_ordering(937) 00:12:05.818 fused_ordering(938) 00:12:05.818 fused_ordering(939) 00:12:05.818 fused_ordering(940) 00:12:05.818 fused_ordering(941) 00:12:05.818 fused_ordering(942) 00:12:05.818 fused_ordering(943) 00:12:05.818 fused_ordering(944) 00:12:05.818 fused_ordering(945) 00:12:05.818 fused_ordering(946) 00:12:05.818 fused_ordering(947) 00:12:05.818 fused_ordering(948) 00:12:05.818 fused_ordering(949) 00:12:05.818 fused_ordering(950) 00:12:05.818 fused_ordering(951) 00:12:05.818 fused_ordering(952) 00:12:05.818 fused_ordering(953) 00:12:05.818 fused_ordering(954) 00:12:05.818 fused_ordering(955) 00:12:05.818 fused_ordering(956) 00:12:05.818 fused_ordering(957) 00:12:05.818 fused_ordering(958) 00:12:05.818 fused_ordering(959) 00:12:05.818 fused_ordering(960) 00:12:05.818 fused_ordering(961) 00:12:05.818 fused_ordering(962) 00:12:05.818 fused_ordering(963) 00:12:05.818 fused_ordering(964) 00:12:05.818 fused_ordering(965) 00:12:05.818 fused_ordering(966) 00:12:05.818 fused_ordering(967) 00:12:05.818 fused_ordering(968) 00:12:05.818 fused_ordering(969) 00:12:05.818 fused_ordering(970) 00:12:05.818 fused_ordering(971) 00:12:05.818 fused_ordering(972) 00:12:05.818 fused_ordering(973) 00:12:05.818 fused_ordering(974) 00:12:05.818 fused_ordering(975) 00:12:05.818 fused_ordering(976) 00:12:05.818 fused_ordering(977) 00:12:05.818 fused_ordering(978) 00:12:05.818 fused_ordering(979) 00:12:05.818 fused_ordering(980) 00:12:05.818 fused_ordering(981) 00:12:05.818 fused_ordering(982) 00:12:05.818 fused_ordering(983) 00:12:05.818 fused_ordering(984) 00:12:05.819 fused_ordering(985) 00:12:05.819 fused_ordering(986) 00:12:05.819 fused_ordering(987) 00:12:05.819 fused_ordering(988) 00:12:05.819 fused_ordering(989) 00:12:05.819 fused_ordering(990) 00:12:05.819 fused_ordering(991) 00:12:05.819 fused_ordering(992) 00:12:05.819 fused_ordering(993) 00:12:05.819 fused_ordering(994) 00:12:05.819 fused_ordering(995) 00:12:05.819 fused_ordering(996) 
00:12:05.819 fused_ordering(997) 00:12:05.819 fused_ordering(998) 00:12:05.819 fused_ordering(999) 00:12:05.819 fused_ordering(1000) 00:12:05.819 fused_ordering(1001) 00:12:05.819 fused_ordering(1002) 00:12:05.819 fused_ordering(1003) 00:12:05.819 fused_ordering(1004) 00:12:05.819 fused_ordering(1005) 00:12:05.819 fused_ordering(1006) 00:12:05.819 fused_ordering(1007) 00:12:05.819 fused_ordering(1008) 00:12:05.819 fused_ordering(1009) 00:12:05.819 fused_ordering(1010) 00:12:05.819 fused_ordering(1011) 00:12:05.819 fused_ordering(1012) 00:12:05.819 fused_ordering(1013) 00:12:05.819 fused_ordering(1014) 00:12:05.819 fused_ordering(1015) 00:12:05.819 fused_ordering(1016) 00:12:05.819 fused_ordering(1017) 00:12:05.819 fused_ordering(1018) 00:12:05.819 fused_ordering(1019) 00:12:05.819 fused_ordering(1020) 00:12:05.819 fused_ordering(1021) 00:12:05.819 fused_ordering(1022) 00:12:05.819 fused_ordering(1023) 00:12:05.819 01:12:28 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:12:05.819 01:12:28 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:12:05.819 01:12:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:05.819 01:12:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:12:05.819 01:12:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:05.819 01:12:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:12:05.819 01:12:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:05.819 01:12:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:05.819 rmmod nvme_tcp 00:12:05.819 rmmod nvme_fabrics 00:12:05.819 rmmod nvme_keyring 00:12:05.819 01:12:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:05.819 01:12:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:12:05.819 01:12:28 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:12:05.819 01:12:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 805165 ']' 00:12:05.819 01:12:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 805165 00:12:05.819 01:12:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 805165 ']' 00:12:05.819 01:12:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 805165 00:12:05.819 01:12:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:12:05.819 01:12:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:05.819 01:12:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 805165 00:12:05.819 01:12:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:05.819 01:12:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:05.819 01:12:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 805165' 00:12:05.819 killing process with pid 805165 00:12:05.819 01:12:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 805165 00:12:05.819 01:12:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 805165 00:12:06.079 01:12:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:06.079 01:12:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:06.079 01:12:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:06.079 01:12:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:06.079 01:12:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:06.079 01:12:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:12:06.079 01:12:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:06.079 01:12:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:08.621 01:12:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:08.621 00:12:08.621 real 0m13.278s 00:12:08.621 user 0m8.953s 00:12:08.621 sys 0m7.257s 00:12:08.621 01:12:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:08.621 01:12:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:08.621 ************************************ 00:12:08.621 END TEST nvmf_fused_ordering 00:12:08.621 ************************************ 00:12:08.621 01:12:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:08.621 01:12:30 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:12:08.621 01:12:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:08.621 01:12:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:08.621 01:12:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:08.621 ************************************ 00:12:08.621 START TEST nvmf_delete_subsystem 00:12:08.621 ************************************ 00:12:08.621 01:12:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:12:08.621 * Looking for test storage... 
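The teardown traced above ends with a `killprocess` helper: check the PID is still alive with `kill -0`, resolve its name with `ps`, then kill and reap it. A minimal standalone sketch of that pattern, assuming nothing beyond what the log shows; the function name and the `sleep` target are illustrative, not the `autotest_common.sh` originals:

```shell
# Sketch of the killprocess pattern from the teardown log: verify the
# PID is alive, resolve what it is, terminate it, then reap it.
kill_by_pid() {
    pid="$1"
    kill -0 "$pid" 2>/dev/null || return 0      # nothing to do: already gone
    name=$(ps -o comm= -p "$pid")               # resolve the process name
    echo "killing process with pid $pid ($name)"
    kill "$pid"
    wait "$pid" 2>/dev/null || true             # reap it if it is our child
}

sleep 60 &                                      # hypothetical target process
target=$!
kill_by_pid "$target"
```

The logged helper also compares the resolved process name against `sudo` before killing; the sketch keeps only the liveness check, the kill, and the reap.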
00:12:08.621 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:08.621 01:12:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:08.621 01:12:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:12:08.621 01:12:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:08.621 01:12:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:08.621 01:12:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:08.621 01:12:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:08.621 01:12:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:08.621 01:12:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:08.621 01:12:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:08.621 01:12:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:08.621 01:12:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:08.621 01:12:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:08.621 01:12:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:08.621 01:12:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:08.621 01:12:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:08.621 01:12:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:08.621 01:12:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # 
NET_TYPE=phy 00:12:08.621 01:12:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:08.621 01:12:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:08.621 01:12:30 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:08.621 01:12:30 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:08.621 01:12:30 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:08.621 01:12:30 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.622 01:12:30 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.622 01:12:30 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.622 01:12:30 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:12:08.622 01:12:30 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.622 01:12:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:12:08.622 01:12:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:08.622 01:12:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:08.622 01:12:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:08.622 01:12:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:08.622 01:12:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:08.622 01:12:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:08.622 01:12:30 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:08.622 01:12:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:08.622 01:12:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:12:08.622 01:12:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:08.622 01:12:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:08.622 01:12:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:08.622 01:12:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:08.622 01:12:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:08.622 01:12:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:08.622 01:12:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:08.622 01:12:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:08.622 01:12:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:08.622 01:12:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:08.622 01:12:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:12:08.622 01:12:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:13.907 01:12:35 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:13.907 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:13.907 Found 
0000:86:00.1 (0x8086 - 0x159b) 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:13.907 Found net devices under 0000:86:00.0: cvl_0_0 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:13.907 Found net devices under 0000:86:00.1: cvl_0_1 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:13.907 
01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:13.907 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:13.907 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:12:13.907 00:12:13.907 --- 10.0.0.2 ping statistics --- 00:12:13.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:13.907 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:13.907 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:13.907 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.447 ms 00:12:13.907 00:12:13.907 --- 10.0.0.1 ping statistics --- 00:12:13.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:13.907 rtt min/avg/max/mdev = 0.447/0.447/0.447/0.000 ms 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:13.907 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:13.907 
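The `nvmf_tcp_init` sequence whose pings succeed above wires the target NIC into a private network namespace so initiator and target can talk over real interfaces on one host. An annotated sketch of that wiring, taken directly from the commands in the log; these need root, so the sketch defaults to a dry run (`RUN=echo`) and only prints the plan — set `RUN=` to execute:

```shell
# Namespace wiring behind the ping checks in the log. Interface and
# namespace names mirror the log (cvl_0_0, cvl_0_1, cvl_0_0_ns_spdk).
RUN=${RUN:-echo}                                 # dry run by default
NS=cvl_0_0_ns_spdk
$RUN ip netns add "$NS"
$RUN ip link set cvl_0_0 netns "$NS"             # target NIC into the namespace
$RUN ip addr add 10.0.0.1/24 dev cvl_0_1         # initiator side, host netns
$RUN ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
$RUN ip link set cvl_0_1 up
$RUN ip netns exec "$NS" ip link set cvl_0_0 up
$RUN ip netns exec "$NS" ip link set lo up
$RUN iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
$RUN ping -c 1 10.0.0.2                          # host -> target namespace
$RUN ip netns exec "$NS" ping -c 1 10.0.0.1      # target namespace -> host
```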
01:12:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:12:13.908 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=809395
00:12:13.908 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 809395
00:12:13.908 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:12:13.908 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 809395 ']'
00:12:13.908 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:13.908 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100
00:12:13.908 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:13.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:13.908 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable
00:12:13.908 01:12:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:12:13.908 [2024-07-25 01:12:35.689085] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization...
00:12:13.908 [2024-07-25 01:12:35.689131] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:13.908 EAL: No free 2048 kB hugepages reported on node 1
00:12:13.908 [2024-07-25 01:12:35.747714] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:12:13.908 [2024-07-25 01:12:35.827480] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:12:13.908 [2024-07-25 01:12:35.827514] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:12:13.908 [2024-07-25 01:12:35.827521] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:12:13.908 [2024-07-25 01:12:35.827528] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:12:13.908 [2024-07-25 01:12:35.827534] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:12:13.908 [2024-07-25 01:12:35.827580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:12:13.908 [2024-07-25 01:12:35.827583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:12:14.168 01:12:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:12:14.168 01:12:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0
00:12:14.168 01:12:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:12:14.168 01:12:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable
00:12:14.168 01:12:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:12:14.168 01:12:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:12:14.168 01:12:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:12:14.168 01:12:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:14.168 01:12:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:12:14.168 [2024-07-25 01:12:36.535653] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:12:14.168 01:12:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
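The `-m 0x3` passed to nvmf_tgt above is a CPU core bitmask; the two "Reactor started on core N" notices correspond to its set bits (cores 0 and 1), just as the perf tool's `-c 0xC` later pins its lcores to cores 2 and 3. A small sketch (`mask_to_cores` is a hypothetical helper, not part of the harness) expanding such a mask:

```shell
# Expand a CPU core bitmask (as accepted by nvmf_tgt -m / spdk_nvme_perf -c)
# into the list of core indices it selects.
mask_to_cores() {
  mask=$(( $1 ))
  bit=0
  out=""
  while [ "$mask" -ne 0 ]; do
    if [ $(( mask & 1 )) -ne 0 ]; then
      out="$out $bit"
    fi
    mask=$(( mask >> 1 ))
    bit=$(( bit + 1 ))
  done
  printf '%s\n' "${out# }"
}

mask_to_cores 0x3    # prints "0 1" (the two nvmf_tgt reactors)
mask_to_cores 0xC    # prints "2 3" (the spdk_nvme_perf lcores)
```

Disjoint masks keep the target and the initiator workload off each other's cores, which matters for the latency numbers reported at the end of the run.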
00:12:14.168 01:12:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:12:14.168 01:12:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:14.168 01:12:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:12:14.168 01:12:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:14.168 01:12:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:14.168 01:12:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:14.168 01:12:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:12:14.168 [2024-07-25 01:12:36.559818] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:14.168 01:12:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:14.168 01:12:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:12:14.168 01:12:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:14.168 01:12:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:12:14.168 NULL1
00:12:14.168 01:12:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:14.168 01:12:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:12:14.168 01:12:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:14.168 01:12:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:12:14.168 Delay0
00:12:14.168 01:12:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:14.168 01:12:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:12:14.168 01:12:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:14.168 01:12:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:12:14.168 01:12:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:14.168 01:12:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=809640
00:12:14.168 01:12:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2
00:12:14.168 01:12:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4
00:12:14.168 EAL: No free 2048 kB hugepages reported on node 1
00:12:14.168 [2024-07-25 01:12:36.646473] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
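The records above show the delete-under-load pattern this test exercises: spdk_nvme_perf is started against the subsystem, and while it is still driving I/O the subsystem is deleted, after which the script polls the perf PID with a bounded `kill -0` / `sleep 0.5` loop until it exits. A sketch of that polling step (`wait_for_exit` is a hypothetical stand-in; the real delete_subsystem.sh inlines this logic at lines 34-38):

```shell
# Poll a PID until it exits, bounded by a retry budget, as the
# delete-under-load test does for the spdk_nvme_perf workload.
wait_for_exit() {
  pid=$1
  delay=0
  while kill -0 "$pid" 2>/dev/null; do
    delay=$((delay + 1))
    if [ "$delay" -gt 30 ]; then
      echo "timeout"
      return 1
    fi
    sleep 0.1
  done
  echo "exited"
}

sleep 0.3 &                 # stand-in for the spdk_nvme_perf workload
# rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # would run here
wait_for_exit $!            # prints "exited" once the workload is gone
```

The "kill: (809640) - No such process" line seen later in the log is this loop's terminating condition: `kill -0` fails once the perf process has died from the deleted subsystem's aborted I/O.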
00:12:16.137 01:12:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:16.137 01:12:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:16.137 01:12:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:12:16.412 Read completed with error (sct=0, sc=8)
00:12:16.412 Write completed with error (sct=0, sc=8)
00:12:16.412 starting I/O failed: -6
00:12:16.412 [2024-07-25 01:12:38.789794] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd0700 is same with the state(5) to be set
00:12:16.413 [2024-07-25 01:12:38.791068] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fba90000c00 is same with the state(5) to be set
00:12:17.353 [2024-07-25 01:12:39.746872] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd1ac0 is same with the state(5) to be set
00:12:17.353 [2024-07-25 01:12:39.793347] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd0a20 is same with the state(5) to be set
00:12:17.353 [2024-07-25 01:12:39.794504] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd03e0 is same with the state(5) to be set
00:12:17.353 [2024-07-25 01:12:39.794592] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fba9000d310 is same with the state(5) to be set
00:12:17.354 [2024-07-25 01:12:39.794716] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd0000 is same with the state(5) to be set
00:12:17.354 Initializing NVMe Controllers
00:12:17.354 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:12:17.354 Controller IO queue size 128, less than required.
00:12:17.354 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:12:17.354 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:12:17.354 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:12:17.354 Initialization complete. Launching workers.
00:12:17.354 ========================================================
00:12:17.354 Latency(us)
00:12:17.354 Device Information : IOPS MiB/s Average min max
00:12:17.354 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 169.88 0.08 967410.57 605.17 1011388.33
00:12:17.354 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 152.00 0.07 891688.93 208.76 1013286.79
00:12:17.354 ========================================================
00:12:17.354 Total : 321.89 0.16 931653.13 208.76 1013286.79
00:12:17.354
00:12:17.354 [2024-07-25 01:12:39.795447] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fd1ac0 (9): Bad file descriptor
00:12:17.354 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:12:17.354 01:12:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:17.354 01:12:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:12:17.354 01:12:39
nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 809640
00:12:17.354 01:12:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:12:17.924 01:12:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:12:17.924 01:12:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 809640
00:12:17.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (809640) - No such process
00:12:17.924 01:12:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 809640
00:12:17.924 01:12:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0
00:12:17.924 01:12:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 809640
00:12:17.924 01:12:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait
00:12:17.924 01:12:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:12:17.925 01:12:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait
00:12:17.925 01:12:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:12:17.925 01:12:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 809640
00:12:17.925 01:12:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1
00:12:17.925 01:12:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:12:17.925 01:12:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:12:17.925 01:12:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:12:17.925 01:12:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:12:17.925 01:12:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:17.925 01:12:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:12:17.925 01:12:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:17.925 01:12:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:17.925 01:12:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:17.925 01:12:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:12:17.925 [2024-07-25 01:12:40.321174] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:17.925 01:12:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:17.925 01:12:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:12:17.925 01:12:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:17.925 01:12:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:12:17.925 01:12:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:17.925 01:12:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=810219
00:12:17.925 01:12:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0
00:12:17.925 01:12:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4
00:12:17.925 01:12:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 810219
00:12:17.925 01:12:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:12:17.925 EAL: No free 2048 kB hugepages reported on node 1
00:12:17.925 [2024-07-25 01:12:40.381386] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:12:18.495 01:12:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:12:18.495 01:12:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 810219
00:12:18.495 01:12:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:12:19.066 01:12:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:12:19.066 01:12:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 810219
00:12:19.066 01:12:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:12:19.635 01:12:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:12:19.635 01:12:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 810219
00:12:19.635 01:12:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:12:19.895 01:12:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:12:19.895 01:12:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 810219
00:12:19.895 01:12:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:12:20.464 01:12:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:12:20.464 01:12:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 810219
00:12:20.464 01:12:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:12:21.048 01:12:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:12:21.048 01:12:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 810219
00:12:21.048 01:12:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:12:21.048 Initializing NVMe Controllers
00:12:21.048 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:12:21.048 Controller IO queue size 128, less than required.
00:12:21.048 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:12:21.048 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:12:21.048 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:12:21.048 Initialization complete. Launching workers.
00:12:21.048 ========================================================
00:12:21.048 Latency(us)
00:12:21.048 Device Information : IOPS MiB/s Average min max
00:12:21.048 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003767.69 1000378.86 1011327.13
00:12:21.048 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005540.76 1000471.86 1013405.36
00:12:21.048 ========================================================
00:12:21.048 Total : 256.00 0.12 1004654.22 1000378.86 1013405.36
00:12:21.048
00:12:21.619 01:12:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:12:21.619 01:12:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 810219
00:12:21.619 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (810219) - No such process
00:12:21.619 01:12:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 810219
00:12:21.619 01:12:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:12:21.619 01:12:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:12:21.619 01:12:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup
00:12:21.619 01:12:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync
00:12:21.619 01:12:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:12:21.619 01:12:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e
00:12:21.619 01:12:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20}
00:12:21.619 01:12:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:12:21.619 rmmod nvme_tcp
00:12:21.619 rmmod nvme_fabrics
00:12:21.619 rmmod nvme_keyring
00:12:21.619 01:12:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:12:21.619 01:12:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e
00:12:21.619 01:12:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0
00:12:21.619 01:12:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 809395 ']'
00:12:21.619 01:12:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 809395
00:12:21.619 01:12:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 809395 ']'
00:12:21.619 01:12:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 809395
00:12:21.619 01:12:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname
00:12:21.619 01:12:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:12:21.619 01:12:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 809395
00:12:21.619 01:12:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954
-- # process_name=reactor_0 00:12:21.619 01:12:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:21.619 01:12:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 809395' 00:12:21.619 killing process with pid 809395 00:12:21.619 01:12:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 809395 00:12:21.619 01:12:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 809395 00:12:21.879 01:12:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:21.879 01:12:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:21.879 01:12:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:21.879 01:12:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:21.879 01:12:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:21.879 01:12:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:21.879 01:12:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:21.879 01:12:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:23.790 01:12:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:23.790 00:12:23.790 real 0m15.613s 00:12:23.790 user 0m29.945s 00:12:23.790 sys 0m4.653s 00:12:23.790 01:12:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:23.790 01:12:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:23.790 ************************************ 00:12:23.790 END TEST nvmf_delete_subsystem 00:12:23.790 ************************************ 00:12:23.790 01:12:46 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:12:23.790 01:12:46 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:12:23.790 01:12:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:23.790 01:12:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:23.790 01:12:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:24.049 ************************************ 00:12:24.049 START TEST nvmf_ns_masking 00:12:24.049 ************************************ 00:12:24.049 01:12:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:12:24.049 * Looking for test storage... 00:12:24.049 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:24.049 01:12:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:24.049 01:12:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:12:24.049 01:12:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:24.049 01:12:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:24.049 01:12:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:24.049 01:12:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:24.049 01:12:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:24.049 01:12:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:24.049 01:12:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:24.049 01:12:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:24.049 01:12:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:24.049 01:12:46 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:24.049 01:12:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:24.049 01:12:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:24.049 01:12:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:24.049 01:12:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:24.049 01:12:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:24.049 01:12:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:24.049 01:12:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:24.049 01:12:46 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:24.049 01:12:46 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:24.049 01:12:46 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:24.049 01:12:46 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.049 01:12:46 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.049 01:12:46 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.049 01:12:46 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:12:24.049 01:12:46 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.049 01:12:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:12:24.049 01:12:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export 
NVMF_APP_SHM_ID 00:12:24.049 01:12:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:24.049 01:12:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:24.050 01:12:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:24.050 01:12:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:24.050 01:12:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:24.050 01:12:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:24.050 01:12:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:24.050 01:12:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:24.050 01:12:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:12:24.050 01:12:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:12:24.050 01:12:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:12:24.050 01:12:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=be67d385-53bf-43d2-9cf8-e810dcb70b1b 00:12:24.050 01:12:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:12:24.050 01:12:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=d2610498-87fa-4995-bb90-932a9b3fbee4 00:12:24.050 01:12:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:12:24.050 01:12:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:12:24.050 01:12:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:12:24.050 01:12:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:12:24.050 01:12:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # 
HOSTID=ade64080-3984-4f85-9b8e-8ee392b1aef9 00:12:24.050 01:12:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:12:24.050 01:12:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:24.050 01:12:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:24.050 01:12:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:24.050 01:12:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:24.050 01:12:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:24.050 01:12:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:24.050 01:12:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:24.050 01:12:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:24.050 01:12:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:24.050 01:12:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:24.050 01:12:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:12:24.050 01:12:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:29.330 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:29.330 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:12:29.330 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:29.330 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:29.330 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:29.330 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:29.330 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 
00:12:29.330 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:12:29.330 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:29.330 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:12:29.330 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:12:29.330 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:12:29.330 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:12:29.330 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:12:29.330 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:12:29.330 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:29.330 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:29.331 
01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:29.331 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:29.331 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:29.331 01:12:51 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:29.331 Found net devices under 0000:86:00.0: cvl_0_0 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:29.331 01:12:51 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:29.331 Found net devices under 0000:86:00.1: cvl_0_1 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:29.331 
01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:29.331 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:29.331 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:12:29.331 00:12:29.331 --- 10.0.0.2 ping statistics --- 00:12:29.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:29.331 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:29.331 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:29.331 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.415 ms 00:12:29.331 00:12:29.331 --- 10.0.0.1 ping statistics --- 00:12:29.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:29.331 rtt min/avg/max/mdev = 0.415/0.415/0.415/0.000 ms 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=814270 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 814270 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' 
-z 814270 ']' 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:29.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:29.331 01:12:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:29.331 [2024-07-25 01:12:51.642715] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:12:29.331 [2024-07-25 01:12:51.642762] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:29.331 EAL: No free 2048 kB hugepages reported on node 1 00:12:29.331 [2024-07-25 01:12:51.701040] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:29.331 [2024-07-25 01:12:51.780689] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:29.332 [2024-07-25 01:12:51.780724] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:29.332 [2024-07-25 01:12:51.780731] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:29.332 [2024-07-25 01:12:51.780737] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:29.332 [2024-07-25 01:12:51.780742] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:29.332 [2024-07-25 01:12:51.780757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:30.270 01:12:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:30.270 01:12:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:12:30.270 01:12:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:30.270 01:12:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:30.270 01:12:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:30.270 01:12:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:30.270 01:12:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:30.270 [2024-07-25 01:12:52.616619] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:30.270 01:12:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:12:30.270 01:12:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:12:30.271 01:12:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:30.529 Malloc1 00:12:30.529 01:12:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:30.529 Malloc2 00:12:30.529 01:12:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:30.789 01:12:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:12:31.047 01:12:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:31.047 [2024-07-25 01:12:53.470083] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:31.047 01:12:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:12:31.047 01:12:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I ade64080-3984-4f85-9b8e-8ee392b1aef9 -a 10.0.0.2 -s 4420 -i 4 00:12:31.306 01:12:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:12:31.306 01:12:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:12:31.306 01:12:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:31.306 01:12:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:31.306 01:12:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:12:33.215 01:12:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:33.215 01:12:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:33.215 01:12:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:33.215 01:12:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:33.215 01:12:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:33.215 01:12:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:12:33.215 
01:12:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:33.215 01:12:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:33.215 01:12:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:33.215 01:12:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:33.215 01:12:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:12:33.215 01:12:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:33.215 01:12:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:33.215 [ 0]:0x1 00:12:33.215 01:12:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:33.215 01:12:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:33.215 01:12:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=eef90f5383a04ba8bdc3d7efc453b6d9 00:12:33.215 01:12:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ eef90f5383a04ba8bdc3d7efc453b6d9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:33.215 01:12:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:12:33.476 01:12:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:12:33.476 01:12:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:33.476 01:12:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:33.476 [ 0]:0x1 00:12:33.476 01:12:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:33.476 01:12:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 
00:12:33.476 01:12:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=eef90f5383a04ba8bdc3d7efc453b6d9 00:12:33.476 01:12:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ eef90f5383a04ba8bdc3d7efc453b6d9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:33.476 01:12:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:12:33.476 01:12:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:33.476 01:12:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:33.476 [ 1]:0x2 00:12:33.476 01:12:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:33.476 01:12:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:33.736 01:12:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a7011c463ad04cb99d3d8484820e8b04 00:12:33.736 01:12:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a7011c463ad04cb99d3d8484820e8b04 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:33.736 01:12:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:12:33.736 01:12:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:33.736 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:33.736 01:12:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:33.736 01:12:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:12:33.997 01:12:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:12:33.997 01:12:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 
-- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I ade64080-3984-4f85-9b8e-8ee392b1aef9 -a 10.0.0.2 -s 4420 -i 4 00:12:34.257 01:12:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:12:34.257 01:12:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:12:34.257 01:12:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:34.257 01:12:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:12:34.257 01:12:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:12:34.257 01:12:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:12:36.168 01:12:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:36.168 01:12:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:36.168 01:12:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:36.168 01:12:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:36.168 01:12:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:36.168 01:12:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:12:36.168 01:12:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:36.168 01:12:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:36.168 01:12:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:36.168 01:12:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:36.168 01:12:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 
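The `ns_is_visible` helper exercised throughout this trace decides visibility from the NGUID that `nvme id-ns ... -o json | jq -r .nguid` reports: a namespace masked from the host identifies with an all-zero NGUID, as the `00000000000000000000000000000000` comparisons above show. A self-contained sketch of that comparison (the `id-ns` output is inlined so no controller is needed):

```shell
#!/usr/bin/env bash
# ns_is_visible-style check (target/ns_masking.sh): a namespace is treated as
# masked when Identify Namespace returns an all-zero NGUID.
ns_state() {
  local nguid=$1   # in the test: nvme id-ns /dev/nvme0 -n $nsid -o json | jq -r .nguid
  if [[ $nguid != "00000000000000000000000000000000" ]]; then
    echo visible
  else
    echo masked
  fi
}

ns_state eef90f5383a04ba8bdc3d7efc453b6d9   # -> visible
ns_state 00000000000000000000000000000000   # -> masked
```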
00:12:36.168 01:12:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:36.168 01:12:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:36.168 01:12:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:36.168 01:12:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:36.168 01:12:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:36.168 01:12:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:36.168 01:12:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:36.168 01:12:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:36.168 01:12:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:36.428 01:12:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:36.428 01:12:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:36.428 01:12:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:36.428 01:12:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:36.428 01:12:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:36.428 01:12:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:36.428 01:12:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:36.428 01:12:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:36.428 01:12:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:12:36.428 01:12:58 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:36.428 01:12:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:36.428 [ 0]:0x2 00:12:36.428 01:12:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:36.428 01:12:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:36.428 01:12:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a7011c463ad04cb99d3d8484820e8b04 00:12:36.428 01:12:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a7011c463ad04cb99d3d8484820e8b04 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:36.428 01:12:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:36.686 01:12:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:12:36.686 01:12:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:36.686 01:12:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:36.686 [ 0]:0x1 00:12:36.686 01:12:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:36.686 01:12:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:36.686 01:12:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=eef90f5383a04ba8bdc3d7efc453b6d9 00:12:36.686 01:12:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ eef90f5383a04ba8bdc3d7efc453b6d9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:36.686 01:12:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:12:36.686 01:12:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:36.686 01:12:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns 
/dev/nvme0 00:12:36.686 [ 1]:0x2 00:12:36.686 01:12:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:36.686 01:12:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:36.686 01:12:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a7011c463ad04cb99d3d8484820e8b04 00:12:36.686 01:12:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a7011c463ad04cb99d3d8484820e8b04 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:36.687 01:12:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:36.946 01:12:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:12:36.946 01:12:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:36.946 01:12:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:36.946 01:12:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:36.946 01:12:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:36.946 01:12:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:36.946 01:12:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:36.946 01:12:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:36.946 01:12:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:36.946 01:12:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:36.946 01:12:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:36.946 01:12:59 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:12:36.946 01:12:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:36.946 01:12:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:36.946 01:12:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:36.946 01:12:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:36.946 01:12:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:36.946 01:12:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:36.946 01:12:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:12:36.946 01:12:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:36.946 01:12:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:36.946 [ 0]:0x2 00:12:36.946 01:12:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:36.946 01:12:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:36.946 01:12:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a7011c463ad04cb99d3d8484820e8b04 00:12:36.946 01:12:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a7011c463ad04cb99d3d8484820e8b04 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:36.946 01:12:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:12:36.946 01:12:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:36.946 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:36.946 01:12:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:37.206 01:12:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:12:37.206 01:12:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I ade64080-3984-4f85-9b8e-8ee392b1aef9 -a 10.0.0.2 -s 4420 -i 4 00:12:37.206 01:12:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:37.206 01:12:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:12:37.206 01:12:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:37.206 01:12:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:12:37.206 01:12:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:12:37.206 01:12:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:12:39.755 01:13:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:39.755 01:13:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:39.755 01:13:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:39.755 01:13:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:12:39.755 01:13:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:39.755 01:13:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:12:39.755 01:13:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:39.755 01:13:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:39.755 01:13:01 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:39.755 01:13:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:39.755 01:13:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:12:39.755 01:13:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:39.755 01:13:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:39.755 [ 0]:0x1 00:12:39.755 01:13:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:39.755 01:13:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:39.755 01:13:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=eef90f5383a04ba8bdc3d7efc453b6d9 00:12:39.755 01:13:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ eef90f5383a04ba8bdc3d7efc453b6d9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:39.755 01:13:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:12:39.755 01:13:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:39.755 01:13:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:39.755 [ 1]:0x2 00:12:39.755 01:13:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:39.755 01:13:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:39.755 01:13:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a7011c463ad04cb99d3d8484820e8b04 00:12:39.755 01:13:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a7011c463ad04cb99d3d8484820e8b04 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:39.755 01:13:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 
nqn.2016-06.io.spdk:host1 00:12:39.755 01:13:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:12:39.755 01:13:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:39.755 01:13:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:39.755 01:13:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:39.755 01:13:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:39.755 01:13:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:39.755 01:13:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:39.755 01:13:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:39.755 01:13:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:39.755 01:13:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:39.755 01:13:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:39.755 01:13:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:39.755 01:13:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:39.755 01:13:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:39.755 01:13:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:39.755 01:13:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:39.755 01:13:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:39.755 01:13:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:39.755 01:13:02 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:12:39.755 01:13:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:39.755 01:13:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:39.755 [ 0]:0x2 00:12:39.755 01:13:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:39.755 01:13:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:39.755 01:13:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a7011c463ad04cb99d3d8484820e8b04 00:12:39.755 01:13:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a7011c463ad04cb99d3d8484820e8b04 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:39.755 01:13:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:39.755 01:13:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:39.755 01:13:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:39.755 01:13:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:39.755 01:13:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:39.755 01:13:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:39.755 01:13:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:39.755 01:13:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:39.755 01:13:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:39.755 01:13:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:39.755 01:13:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:39.755 01:13:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:40.015 [2024-07-25 01:13:02.278894] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:12:40.015 request: 00:12:40.015 { 00:12:40.015 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:40.015 "nsid": 2, 00:12:40.015 "host": "nqn.2016-06.io.spdk:host1", 00:12:40.015 "method": "nvmf_ns_remove_host", 00:12:40.015 "req_id": 1 00:12:40.015 } 00:12:40.015 Got JSON-RPC error response 00:12:40.015 response: 00:12:40.015 { 00:12:40.015 "code": -32602, 00:12:40.015 "message": "Invalid parameters" 00:12:40.015 } 00:12:40.015 01:13:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:40.016 01:13:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:40.016 01:13:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:40.016 01:13:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:40.016 01:13:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:12:40.016 01:13:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:40.016 01:13:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 
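The `NOT` wrapper seen around `ns_is_visible` and around the `nvmf_ns_remove_host` call that returns the JSON-RPC `Invalid parameters` error above inverts a command's exit status, so the test step passes exactly when the wrapped command fails. The real helper in autotest_common.sh also validates the argument and classifies the error code (`es=1`, `(( es > 128 ))`); this sketch keeps only the inversion:

```shell
#!/usr/bin/env bash
# Minimal NOT helper: succeed only if the wrapped command fails.
NOT() {
  local es=0
  "$@" || es=$?
  (( es != 0 ))   # exit 0 when the command failed, 1 when it succeeded
}

NOT false && echo "failure was expected"        # prints: failure was expected
NOT true  || echo "unexpected success caught"   # prints: unexpected success caught
```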
00:12:40.016 01:13:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:40.016 01:13:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:40.016 01:13:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:40.016 01:13:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:40.016 01:13:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:40.016 01:13:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:40.016 01:13:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:40.016 01:13:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:40.016 01:13:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:40.016 01:13:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:40.016 01:13:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:40.016 01:13:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:40.016 01:13:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:40.016 01:13:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:40.016 01:13:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:40.016 01:13:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:12:40.016 01:13:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:40.016 01:13:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:40.016 [ 0]:0x2 00:12:40.016 01:13:02 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:40.016 01:13:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:40.016 01:13:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a7011c463ad04cb99d3d8484820e8b04 00:12:40.016 01:13:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a7011c463ad04cb99d3d8484820e8b04 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:40.016 01:13:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:12:40.016 01:13:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:40.276 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.276 01:13:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=816171 00:12:40.276 01:13:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:12:40.276 01:13:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:12:40.276 01:13:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 816171 /var/tmp/host.sock 00:12:40.276 01:13:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 816171 ']' 00:12:40.276 01:13:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:12:40.276 01:13:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:40.276 01:13:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:12:40.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
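`waitforlisten` above blocks until the freshly started `spdk_tgt` accepts RPCs on `/var/tmp/host.sock`, hence the "Waiting for process to start up and listen on UNIX domain socket" message. The shape is the same bounded poll as `waitforserial`, keyed on the socket becoming usable; a sketch in which a background job creating a plain temp file stands in for the target creating its socket (that substitution, and the poll counts, are assumptions for illustration):

```shell
#!/usr/bin/env bash
# waitforlisten-style poll: wait (bounded) for a path to appear before
# issuing RPCs to it. A background job stands in for spdk_tgt's socket.
sock=$(mktemp -u)            # stand-in for /var/tmp/host.sock
( sleep 0.3; : > "$sock" ) &

ready=0
for (( i = 0; i < 100; i++ )); do
  if [ -e "$sock" ]; then    # the real helper also confirms the RPC replies
    ready=1
    break
  fi
  sleep 0.05
done
(( ready )) && echo "target ready after $(( i + 1 )) poll(s)"
wait
rm -f "$sock"
```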
00:12:40.276 01:13:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:40.276 01:13:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:40.276 [2024-07-25 01:13:02.619426] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:12:40.276 [2024-07-25 01:13:02.619476] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid816171 ] 00:12:40.276 EAL: No free 2048 kB hugepages reported on node 1 00:12:40.276 [2024-07-25 01:13:02.676245] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:40.276 [2024-07-25 01:13:02.751585] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:41.216 01:13:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:41.216 01:13:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:12:41.216 01:13:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:41.216 01:13:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:41.476 01:13:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid be67d385-53bf-43d2-9cf8-e810dcb70b1b 00:12:41.476 01:13:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:12:41.476 01:13:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g BE67D38553BF43D29CF8E810DCB70B1B -i 00:12:41.476 01:13:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 
-- # uuid2nguid d2610498-87fa-4995-bb90-932a9b3fbee4 00:12:41.476 01:13:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:12:41.476 01:13:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g D261049887FA4995BB90932A9B3FBEE4 -i 00:12:41.737 01:13:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:41.997 01:13:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:12:41.997 01:13:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:41.997 01:13:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:42.258 nvme0n1 00:12:42.258 01:13:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:42.258 01:13:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:42.828 nvme1n2 00:12:42.828 01:13:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 
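The `uuid2nguid` helper feeding the `-g` arguments above converts a bdev UUID into the 32-hex-digit NGUID form the RPC expects. Only the `tr -d -` step is visible in the trace; the upper-casing is inferred from the resulting `-g BE67D385...` argument, so treat this as a sketch rather than the exact nvmf/common.sh implementation:

```shell
#!/usr/bin/env bash
# uuid2nguid sketch: UUID -> NGUID (dashes removed, hex digits upper-cased).
uuid2nguid() {
  printf '%s' "$1" | tr -d - | tr '[:lower:]' '[:upper:]'
}

uuid2nguid be67d385-53bf-43d2-9cf8-e810dcb70b1b
# -> BE67D38553BF43D29CF8E810DCB70B1B (matches the -g value passed above)
```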
00:12:42.828 01:13:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:12:42.828 01:13:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:42.828 01:13:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:12:42.828 01:13:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:12:43.087 01:13:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:12:43.087 01:13:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:12:43.087 01:13:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:12:43.087 01:13:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:12:43.087 01:13:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ be67d385-53bf-43d2-9cf8-e810dcb70b1b == \b\e\6\7\d\3\8\5\-\5\3\b\f\-\4\3\d\2\-\9\c\f\8\-\e\8\1\0\d\c\b\7\0\b\1\b ]] 00:12:43.087 01:13:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:12:43.087 01:13:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:12:43.087 01:13:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:12:43.347 01:13:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ d2610498-87fa-4995-bb90-932a9b3fbee4 == \d\2\6\1\0\4\9\8\-\8\7\f\a\-\4\9\9\5\-\b\b\9\0\-\9\3\2\a\9\b\3\f\b\e\e\4 ]] 00:12:43.347 01:13:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 816171 00:12:43.347 01:13:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 816171 ']' 
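The `killprocess 816171` teardown traced here first checks the PID is alive (`kill -0`), reads the command name with `ps --no-headers -o comm=` and refuses to kill `sudo`, then signals and reaps the process. A reduced runnable sketch against a throwaway `sleep` (Linux `ps` assumed, matching the `'[' Linux = Linux ']'` check in the trace):

```shell
#!/usr/bin/env bash
# killprocess-style teardown: verify the PID, inspect its name, kill, reap.
sleep 30 &
pid=$!

kill -0 "$pid"                              # still running?
name=$(ps --no-headers -o comm= "$pid")     # the helper bails out if this is "sudo"
[ "$name" != sudo ]
echo "killing process with pid $pid ($name)"
kill "$pid"
wait "$pid" 2>/dev/null || true             # reap; exit status reflects the signal
echo "reaped"
```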
00:12:43.347 01:13:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 816171 00:12:43.347 01:13:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:12:43.347 01:13:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:43.347 01:13:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 816171 00:12:43.347 01:13:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:43.347 01:13:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:43.347 01:13:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 816171' 00:12:43.347 killing process with pid 816171 00:12:43.347 01:13:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 816171 00:12:43.347 01:13:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 816171 00:12:43.607 01:13:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:43.867 01:13:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:12:43.867 01:13:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:12:43.867 01:13:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:43.867 01:13:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:12:43.867 01:13:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:43.867 01:13:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:12:43.867 01:13:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:43.867 01:13:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:43.867 rmmod nvme_tcp 00:12:43.867 rmmod nvme_fabrics 00:12:43.867 rmmod 
nvme_keyring 00:12:43.867 01:13:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:43.867 01:13:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:12:43.867 01:13:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:12:43.867 01:13:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 814270 ']' 00:12:43.867 01:13:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 814270 00:12:43.867 01:13:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 814270 ']' 00:12:43.867 01:13:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 814270 00:12:43.867 01:13:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:12:43.867 01:13:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:43.867 01:13:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 814270 00:12:44.127 01:13:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:44.127 01:13:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:44.127 01:13:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 814270' 00:12:44.127 killing process with pid 814270 00:12:44.127 01:13:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 814270 00:12:44.127 01:13:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 814270 00:12:44.127 01:13:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:44.127 01:13:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:44.127 01:13:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:44.127 01:13:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:44.127 01:13:06 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:44.128 01:13:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:44.128 01:13:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:44.128 01:13:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:46.672 01:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:46.672 00:12:46.672 real 0m22.373s 00:12:46.672 user 0m24.242s 00:12:46.672 sys 0m5.873s 00:12:46.672 01:13:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:46.672 01:13:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:46.672 ************************************ 00:12:46.672 END TEST nvmf_ns_masking 00:12:46.672 ************************************ 00:12:46.672 01:13:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:46.672 01:13:08 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:12:46.672 01:13:08 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:46.672 01:13:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:46.672 01:13:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:46.672 01:13:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:46.672 ************************************ 00:12:46.672 START TEST nvmf_nvme_cli 00:12:46.672 ************************************ 00:12:46.672 01:13:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:46.672 * Looking for test storage... 
00:12:46.672 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:46.672 01:13:08 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:46.672 01:13:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:12:46.672 01:13:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:46.672 01:13:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:46.672 01:13:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:46.672 01:13:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:46.672 01:13:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:46.672 01:13:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:46.672 01:13:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:46.672 01:13:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:46.672 01:13:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:46.672 01:13:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:46.672 01:13:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:46.672 01:13:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:46.672 01:13:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:46.672 01:13:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:46.672 01:13:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:46.672 01:13:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:46.672 01:13:08 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:46.672 01:13:08 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:46.672 01:13:08 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:46.672 01:13:08 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:46.673 01:13:08 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.673 01:13:08 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.673 01:13:08 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.673 01:13:08 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:12:46.673 01:13:08 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.673 01:13:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:12:46.673 01:13:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:46.673 01:13:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:46.673 01:13:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:46.673 01:13:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:46.673 01:13:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:46.673 01:13:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:46.673 01:13:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:46.673 01:13:08 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:46.673 01:13:08 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:46.673 01:13:08 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:46.673 01:13:08 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:12:46.673 01:13:08 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:12:46.673 01:13:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:46.673 01:13:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:46.673 01:13:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:46.673 01:13:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:46.673 01:13:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:46.673 01:13:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:46.673 01:13:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:46.673 01:13:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:46.673 01:13:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:46.673 01:13:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:46.673 01:13:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:12:46.673 01:13:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:51.997 01:13:14 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:51.997 01:13:14 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:51.997 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:51.997 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:51.997 01:13:14 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:51.997 Found net devices under 0000:86:00.0: cvl_0_0 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:51.997 Found net devices under 0000:86:00.1: cvl_0_1 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 
00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:51.997 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:51.997 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.278 ms 00:12:51.997 00:12:51.997 --- 10.0.0.2 ping statistics --- 00:12:51.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:51.997 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:51.997 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:51.997 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.425 ms 00:12:51.997 00:12:51.997 --- 10.0.0.1 ping statistics --- 00:12:51.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:51.997 rtt min/avg/max/mdev = 0.425/0.425/0.425/0.000 ms 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:12:51.997 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:51.998 01:13:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:51.998 01:13:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:51.998 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=820354 00:12:51.998 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 820354 00:12:51.998 01:13:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:51.998 01:13:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 820354 ']' 
00:12:51.998 01:13:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:51.998 01:13:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:51.998 01:13:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:51.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:51.998 01:13:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:51.998 01:13:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:51.998 [2024-07-25 01:13:14.414188] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:12:51.998 [2024-07-25 01:13:14.414233] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:51.998 EAL: No free 2048 kB hugepages reported on node 1 00:12:51.998 [2024-07-25 01:13:14.472076] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:52.258 [2024-07-25 01:13:14.555403] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:52.258 [2024-07-25 01:13:14.555439] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:52.258 [2024-07-25 01:13:14.555446] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:52.258 [2024-07-25 01:13:14.555452] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:52.258 [2024-07-25 01:13:14.555458] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:52.258 [2024-07-25 01:13:14.555500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:52.258 [2024-07-25 01:13:14.555597] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:52.258 [2024-07-25 01:13:14.555659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:52.258 [2024-07-25 01:13:14.555660] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.828 01:13:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:52.828 01:13:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:12:52.828 01:13:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:52.828 01:13:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:52.828 01:13:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:52.828 01:13:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:52.828 01:13:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:52.828 01:13:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.828 01:13:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:52.828 [2024-07-25 01:13:15.274991] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:52.828 01:13:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.828 01:13:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:52.828 01:13:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.828 01:13:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:52.828 Malloc0 00:12:52.828 01:13:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.828 
01:13:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:52.828 01:13:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.828 01:13:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:53.087 Malloc1 00:12:53.087 01:13:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.087 01:13:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:12:53.087 01:13:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.087 01:13:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:53.087 01:13:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.087 01:13:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:53.087 01:13:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.087 01:13:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:53.087 01:13:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.087 01:13:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:53.087 01:13:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.087 01:13:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:53.087 01:13:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.087 01:13:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:53.087 01:13:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 
00:12:53.087 01:13:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:53.087 [2024-07-25 01:13:15.356612] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:53.087 01:13:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.087 01:13:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:53.087 01:13:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.087 01:13:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:53.087 01:13:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.087 01:13:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:12:53.087 00:12:53.087 Discovery Log Number of Records 2, Generation counter 2 00:12:53.087 =====Discovery Log Entry 0====== 00:12:53.087 trtype: tcp 00:12:53.087 adrfam: ipv4 00:12:53.087 subtype: current discovery subsystem 00:12:53.087 treq: not required 00:12:53.087 portid: 0 00:12:53.087 trsvcid: 4420 00:12:53.087 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:53.087 traddr: 10.0.0.2 00:12:53.087 eflags: explicit discovery connections, duplicate discovery information 00:12:53.087 sectype: none 00:12:53.087 =====Discovery Log Entry 1====== 00:12:53.087 trtype: tcp 00:12:53.087 adrfam: ipv4 00:12:53.087 subtype: nvme subsystem 00:12:53.087 treq: not required 00:12:53.087 portid: 0 00:12:53.087 trsvcid: 4420 00:12:53.087 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:53.087 traddr: 10.0.0.2 00:12:53.087 eflags: none 00:12:53.087 sectype: none 00:12:53.087 01:13:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:12:53.087 01:13:15 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:12:53.087 01:13:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:53.087 01:13:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:53.088 01:13:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:53.088 01:13:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:53.088 01:13:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:53.088 01:13:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:53.088 01:13:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:53.088 01:13:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:12:53.088 01:13:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:54.467 01:13:16 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:54.467 01:13:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:12:54.467 01:13:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:54.467 01:13:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:12:54.467 01:13:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:12:54.467 01:13:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:12:56.374 01:13:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:56.374 01:13:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:56.374 01:13:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 
00:12:56.374 01:13:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:12:56.374 01:13:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:56.374 01:13:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:12:56.374 01:13:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:12:56.374 01:13:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:56.374 01:13:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:56.374 01:13:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:56.374 01:13:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:56.374 01:13:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:56.374 01:13:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:56.374 01:13:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:56.374 01:13:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:56.374 01:13:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:12:56.374 01:13:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:56.374 01:13:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:56.374 01:13:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:12:56.374 01:13:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:56.374 01:13:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:12:56.374 /dev/nvme0n1 ]] 00:12:56.374 01:13:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:12:56.374 01:13:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:12:56.374 01:13:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 
00:12:56.374 01:13:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:56.374 01:13:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:56.374 01:13:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:56.374 01:13:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:56.374 01:13:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:56.374 01:13:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:56.374 01:13:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:56.374 01:13:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:12:56.374 01:13:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:56.374 01:13:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:56.374 01:13:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:12:56.375 01:13:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:56.375 01:13:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:12:56.375 01:13:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:56.375 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:56.375 01:13:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:56.375 01:13:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:12:56.375 01:13:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:56.375 01:13:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:56.375 01:13:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:56.375 01:13:18 nvmf_tcp.nvmf_nvme_cli -- 
common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:56.375 01:13:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:12:56.375 01:13:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:12:56.375 01:13:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:56.375 01:13:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.375 01:13:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:56.375 01:13:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.375 01:13:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:56.375 01:13:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:12:56.375 01:13:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:56.375 01:13:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:12:56.375 01:13:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:56.375 01:13:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:12:56.375 01:13:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:56.375 01:13:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:56.375 rmmod nvme_tcp 00:12:56.375 rmmod nvme_fabrics 00:12:56.375 rmmod nvme_keyring 00:12:56.634 01:13:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:56.634 01:13:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:12:56.634 01:13:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:12:56.634 01:13:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 820354 ']' 00:12:56.634 01:13:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 820354 00:12:56.634 01:13:18 nvmf_tcp.nvmf_nvme_cli -- 
common/autotest_common.sh@948 -- # '[' -z 820354 ']' 00:12:56.634 01:13:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 820354 00:12:56.634 01:13:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:12:56.634 01:13:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:56.634 01:13:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 820354 00:12:56.634 01:13:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:56.634 01:13:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:56.634 01:13:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 820354' 00:12:56.634 killing process with pid 820354 00:12:56.634 01:13:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 820354 00:12:56.634 01:13:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 820354 00:12:56.894 01:13:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:56.894 01:13:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:56.894 01:13:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:56.894 01:13:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:56.894 01:13:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:56.894 01:13:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:56.894 01:13:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:56.894 01:13:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:58.803 01:13:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:58.803 00:12:58.803 real 0m12.493s 00:12:58.803 user 0m19.915s 
00:12:58.803 sys 0m4.662s 00:12:58.803 01:13:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:58.803 01:13:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:58.803 ************************************ 00:12:58.803 END TEST nvmf_nvme_cli 00:12:58.803 ************************************ 00:12:58.803 01:13:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:58.803 01:13:21 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:12:58.803 01:13:21 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:58.803 01:13:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:58.803 01:13:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:58.803 01:13:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:58.803 ************************************ 00:12:58.803 START TEST nvmf_vfio_user 00:12:58.803 ************************************ 00:12:58.803 01:13:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:59.063 * Looking for test storage... 
00:12:59.063 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:59.063 01:13:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:59.063 01:13:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:12:59.063 01:13:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:59.063 01:13:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:59.063 01:13:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:59.063 01:13:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:59.063 01:13:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:59.063 01:13:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:59.063 01:13:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:59.063 01:13:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:59.063 01:13:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:59.063 01:13:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:59.063 01:13:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:59.063 01:13:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:59.063 01:13:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:59.063 01:13:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:59.063 01:13:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:59.063 01:13:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:12:59.063 01:13:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:59.063 01:13:21 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:59.063 01:13:21 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:59.063 01:13:21 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:59.063 01:13:21 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.063 01:13:21 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.063 01:13:21 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.063 01:13:21 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:12:59.063 01:13:21 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.063 01:13:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:12:59.063 01:13:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:59.063 01:13:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:59.063 01:13:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:59.063 01:13:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:59.063 01:13:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:59.063 01:13:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:59.063 01:13:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:59.063 
01:13:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:59.063 01:13:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:59.063 01:13:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:59.063 01:13:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:12:59.063 01:13:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:59.063 01:13:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:59.063 01:13:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:59.063 01:13:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:12:59.063 01:13:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:12:59.064 01:13:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:12:59.064 01:13:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:12:59.064 01:13:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=821643 00:12:59.064 01:13:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 821643' 00:12:59.064 Process pid: 821643 00:12:59.064 01:13:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:59.064 01:13:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 821643 00:12:59.064 01:13:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:12:59.064 01:13:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 821643 ']' 00:12:59.064 01:13:21 nvmf_tcp.nvmf_vfio_user -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:59.064 01:13:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:59.064 01:13:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:59.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:59.064 01:13:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:59.064 01:13:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:59.064 [2024-07-25 01:13:21.427989] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:12:59.064 [2024-07-25 01:13:21.428036] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:59.064 EAL: No free 2048 kB hugepages reported on node 1 00:12:59.064 [2024-07-25 01:13:21.482880] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:59.324 [2024-07-25 01:13:21.558046] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:59.324 [2024-07-25 01:13:21.558091] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:59.324 [2024-07-25 01:13:21.558098] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:59.324 [2024-07-25 01:13:21.558103] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:59.324 [2024-07-25 01:13:21.558108] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:59.324 [2024-07-25 01:13:21.558175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:59.324 [2024-07-25 01:13:21.558270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:59.324 [2024-07-25 01:13:21.558356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:59.324 [2024-07-25 01:13:21.558357] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.893 01:13:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:59.893 01:13:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:12:59.893 01:13:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:00.830 01:13:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:13:01.105 01:13:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:01.105 01:13:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:01.105 01:13:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:01.105 01:13:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:01.105 01:13:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:01.403 Malloc1 00:13:01.403 01:13:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:01.403 01:13:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:01.663 01:13:24 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:01.922 01:13:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:01.922 01:13:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:01.922 01:13:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:01.922 Malloc2 00:13:01.923 01:13:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:02.182 01:13:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:02.442 01:13:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:02.704 01:13:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:13:02.704 01:13:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:13:02.704 01:13:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:02.704 01:13:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:02.704 01:13:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:13:02.704 01:13:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:02.704 [2024-07-25 01:13:24.990628] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:13:02.704 [2024-07-25 01:13:24.990659] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid822226 ] 00:13:02.704 EAL: No free 2048 kB hugepages reported on node 1 00:13:02.704 [2024-07-25 01:13:25.019555] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:13:02.704 [2024-07-25 01:13:25.021855] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:02.704 [2024-07-25 01:13:25.021873] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fb3b1457000 00:13:02.704 [2024-07-25 01:13:25.022855] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:02.704 [2024-07-25 01:13:25.023853] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:02.704 [2024-07-25 01:13:25.024861] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:02.704 [2024-07-25 01:13:25.025868] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:02.704 [2024-07-25 01:13:25.026877] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, 
Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:02.704 [2024-07-25 01:13:25.027878] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:02.704 [2024-07-25 01:13:25.028883] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:02.704 [2024-07-25 01:13:25.029889] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:02.704 [2024-07-25 01:13:25.030894] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:02.704 [2024-07-25 01:13:25.030903] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fb3b144c000 00:13:02.704 [2024-07-25 01:13:25.031844] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:02.704 [2024-07-25 01:13:25.044463] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:13:02.704 [2024-07-25 01:13:25.044485] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:13:02.704 [2024-07-25 01:13:25.047002] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:02.704 [2024-07-25 01:13:25.047040] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:02.704 [2024-07-25 01:13:25.047111] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:13:02.704 [2024-07-25 01:13:25.047127] 
nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:13:02.704 [2024-07-25 01:13:25.047132] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:13:02.704 [2024-07-25 01:13:25.048000] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:13:02.704 [2024-07-25 01:13:25.048012] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:13:02.704 [2024-07-25 01:13:25.048018] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:13:02.704 [2024-07-25 01:13:25.049008] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:02.704 [2024-07-25 01:13:25.049016] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:13:02.704 [2024-07-25 01:13:25.049023] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:13:02.704 [2024-07-25 01:13:25.050012] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:13:02.704 [2024-07-25 01:13:25.050021] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:02.704 [2024-07-25 01:13:25.051013] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:13:02.704 [2024-07-25 01:13:25.051021] 
nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:13:02.704 [2024-07-25 01:13:25.051025] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:13:02.704 [2024-07-25 01:13:25.051031] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:02.704 [2024-07-25 01:13:25.051136] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:13:02.704 [2024-07-25 01:13:25.051142] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:02.705 [2024-07-25 01:13:25.051146] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:13:02.705 [2024-07-25 01:13:25.055048] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:13:02.705 [2024-07-25 01:13:25.056032] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:13:02.705 [2024-07-25 01:13:25.057048] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:02.705 [2024-07-25 01:13:25.058041] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:02.705 [2024-07-25 01:13:25.058126] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:02.705 [2024-07-25 01:13:25.059056] nvme_vfio_user.c: 
83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:13:02.705 [2024-07-25 01:13:25.059063] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:02.705 [2024-07-25 01:13:25.059068] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:13:02.705 [2024-07-25 01:13:25.059085] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:13:02.705 [2024-07-25 01:13:25.059095] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:13:02.705 [2024-07-25 01:13:25.059112] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:02.705 [2024-07-25 01:13:25.059118] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:02.705 [2024-07-25 01:13:25.059130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:02.705 [2024-07-25 01:13:25.059170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:02.705 [2024-07-25 01:13:25.059178] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:13:02.705 [2024-07-25 01:13:25.059184] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:13:02.705 [2024-07-25 01:13:25.059188] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 
00:13:02.705 [2024-07-25 01:13:25.059192] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:02.705 [2024-07-25 01:13:25.059196] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:13:02.705 [2024-07-25 01:13:25.059200] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:13:02.705 [2024-07-25 01:13:25.059204] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:13:02.705 [2024-07-25 01:13:25.059211] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:13:02.705 [2024-07-25 01:13:25.059220] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:02.705 [2024-07-25 01:13:25.059230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:02.705 [2024-07-25 01:13:25.059243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:02.705 [2024-07-25 01:13:25.059251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:02.705 [2024-07-25 01:13:25.059259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:02.705 [2024-07-25 01:13:25.059266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:02.705 [2024-07-25 01:13:25.059271] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:13:02.705 [2024-07-25 01:13:25.059278] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:02.705 [2024-07-25 01:13:25.059287] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:02.705 [2024-07-25 01:13:25.059295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:02.705 [2024-07-25 01:13:25.059300] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:13:02.705 [2024-07-25 01:13:25.059305] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:02.705 [2024-07-25 01:13:25.059310] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:13:02.705 [2024-07-25 01:13:25.059315] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:13:02.705 [2024-07-25 01:13:25.059325] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:02.705 [2024-07-25 01:13:25.059339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:02.705 [2024-07-25 01:13:25.059388] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 
00:13:02.705 [2024-07-25 01:13:25.059395] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:13:02.705 [2024-07-25 01:13:25.059401] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:02.705 [2024-07-25 01:13:25.059405] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:02.705 [2024-07-25 01:13:25.059411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:02.705 [2024-07-25 01:13:25.059424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:02.705 [2024-07-25 01:13:25.059431] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:13:02.705 [2024-07-25 01:13:25.059439] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:13:02.705 [2024-07-25 01:13:25.059446] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:13:02.705 [2024-07-25 01:13:25.059451] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:02.705 [2024-07-25 01:13:25.059455] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:02.705 [2024-07-25 01:13:25.059461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:02.705 [2024-07-25 01:13:25.059479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:02.705 
[2024-07-25 01:13:25.059490] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:02.705 [2024-07-25 01:13:25.059496] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:02.705 [2024-07-25 01:13:25.059502] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:02.705 [2024-07-25 01:13:25.059506] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:02.705 [2024-07-25 01:13:25.059512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:02.705 [2024-07-25 01:13:25.059523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:02.705 [2024-07-25 01:13:25.059531] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:02.705 [2024-07-25 01:13:25.059536] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:13:02.705 [2024-07-25 01:13:25.059543] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:13:02.705 [2024-07-25 01:13:25.059548] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:13:02.705 [2024-07-25 01:13:25.059554] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 
30000 ms) 00:13:02.705 [2024-07-25 01:13:25.059559] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:13:02.705 [2024-07-25 01:13:25.059563] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:13:02.705 [2024-07-25 01:13:25.059566] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:13:02.705 [2024-07-25 01:13:25.059571] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:13:02.705 [2024-07-25 01:13:25.059588] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:02.705 [2024-07-25 01:13:25.059597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:02.705 [2024-07-25 01:13:25.059607] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:02.705 [2024-07-25 01:13:25.059615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:02.706 [2024-07-25 01:13:25.059625] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:02.706 [2024-07-25 01:13:25.059636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:02.706 [2024-07-25 01:13:25.059646] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:02.706 [2024-07-25 01:13:25.059657] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:02.706 [2024-07-25 01:13:25.059669] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:02.706 [2024-07-25 01:13:25.059673] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:02.706 [2024-07-25 01:13:25.059676] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:02.706 [2024-07-25 01:13:25.059680] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:02.706 [2024-07-25 01:13:25.059685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:02.706 [2024-07-25 01:13:25.059691] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:02.706 [2024-07-25 01:13:25.059695] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:02.706 [2024-07-25 01:13:25.059701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:02.706 [2024-07-25 01:13:25.059707] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:02.706 [2024-07-25 01:13:25.059711] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:02.706 [2024-07-25 01:13:25.059716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:02.706 [2024-07-25 01:13:25.059722] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:02.706 [2024-07-25 01:13:25.059726] 
nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:02.706 [2024-07-25 01:13:25.059731] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:02.706 [2024-07-25 01:13:25.059739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:02.706 [2024-07-25 01:13:25.059749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:02.706 [2024-07-25 01:13:25.059759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:02.706 [2024-07-25 01:13:25.059765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:02.706 ===================================================== 00:13:02.706 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:02.706 ===================================================== 00:13:02.706 Controller Capabilities/Features 00:13:02.706 ================================ 00:13:02.706 Vendor ID: 4e58 00:13:02.706 Subsystem Vendor ID: 4e58 00:13:02.706 Serial Number: SPDK1 00:13:02.706 Model Number: SPDK bdev Controller 00:13:02.706 Firmware Version: 24.09 00:13:02.706 Recommended Arb Burst: 6 00:13:02.706 IEEE OUI Identifier: 8d 6b 50 00:13:02.706 Multi-path I/O 00:13:02.706 May have multiple subsystem ports: Yes 00:13:02.706 May have multiple controllers: Yes 00:13:02.706 Associated with SR-IOV VF: No 00:13:02.706 Max Data Transfer Size: 131072 00:13:02.706 Max Number of Namespaces: 32 00:13:02.706 Max Number of I/O Queues: 127 00:13:02.706 NVMe Specification Version (VS): 1.3 00:13:02.706 NVMe Specification Version (Identify): 1.3 00:13:02.706 Maximum Queue Entries: 256 00:13:02.706 
Contiguous Queues Required: Yes 00:13:02.706 Arbitration Mechanisms Supported 00:13:02.706 Weighted Round Robin: Not Supported 00:13:02.706 Vendor Specific: Not Supported 00:13:02.706 Reset Timeout: 15000 ms 00:13:02.706 Doorbell Stride: 4 bytes 00:13:02.706 NVM Subsystem Reset: Not Supported 00:13:02.706 Command Sets Supported 00:13:02.706 NVM Command Set: Supported 00:13:02.706 Boot Partition: Not Supported 00:13:02.706 Memory Page Size Minimum: 4096 bytes 00:13:02.706 Memory Page Size Maximum: 4096 bytes 00:13:02.706 Persistent Memory Region: Not Supported 00:13:02.706 Optional Asynchronous Events Supported 00:13:02.706 Namespace Attribute Notices: Supported 00:13:02.706 Firmware Activation Notices: Not Supported 00:13:02.706 ANA Change Notices: Not Supported 00:13:02.706 PLE Aggregate Log Change Notices: Not Supported 00:13:02.706 LBA Status Info Alert Notices: Not Supported 00:13:02.706 EGE Aggregate Log Change Notices: Not Supported 00:13:02.706 Normal NVM Subsystem Shutdown event: Not Supported 00:13:02.706 Zone Descriptor Change Notices: Not Supported 00:13:02.706 Discovery Log Change Notices: Not Supported 00:13:02.706 Controller Attributes 00:13:02.706 128-bit Host Identifier: Supported 00:13:02.706 Non-Operational Permissive Mode: Not Supported 00:13:02.706 NVM Sets: Not Supported 00:13:02.706 Read Recovery Levels: Not Supported 00:13:02.706 Endurance Groups: Not Supported 00:13:02.706 Predictable Latency Mode: Not Supported 00:13:02.706 Traffic Based Keep ALive: Not Supported 00:13:02.706 Namespace Granularity: Not Supported 00:13:02.706 SQ Associations: Not Supported 00:13:02.706 UUID List: Not Supported 00:13:02.706 Multi-Domain Subsystem: Not Supported 00:13:02.706 Fixed Capacity Management: Not Supported 00:13:02.706 Variable Capacity Management: Not Supported 00:13:02.706 Delete Endurance Group: Not Supported 00:13:02.706 Delete NVM Set: Not Supported 00:13:02.706 Extended LBA Formats Supported: Not Supported 00:13:02.706 Flexible Data Placement 
Supported: Not Supported 00:13:02.706 00:13:02.706 Controller Memory Buffer Support 00:13:02.706 ================================ 00:13:02.706 Supported: No 00:13:02.706 00:13:02.706 Persistent Memory Region Support 00:13:02.706 ================================ 00:13:02.706 Supported: No 00:13:02.706 00:13:02.706 Admin Command Set Attributes 00:13:02.706 ============================ 00:13:02.706 Security Send/Receive: Not Supported 00:13:02.706 Format NVM: Not Supported 00:13:02.706 Firmware Activate/Download: Not Supported 00:13:02.706 Namespace Management: Not Supported 00:13:02.706 Device Self-Test: Not Supported 00:13:02.706 Directives: Not Supported 00:13:02.706 NVMe-MI: Not Supported 00:13:02.706 Virtualization Management: Not Supported 00:13:02.706 Doorbell Buffer Config: Not Supported 00:13:02.706 Get LBA Status Capability: Not Supported 00:13:02.706 Command & Feature Lockdown Capability: Not Supported 00:13:02.706 Abort Command Limit: 4 00:13:02.706 Async Event Request Limit: 4 00:13:02.706 Number of Firmware Slots: N/A 00:13:02.706 Firmware Slot 1 Read-Only: N/A 00:13:02.706 Firmware Activation Without Reset: N/A 00:13:02.706 Multiple Update Detection Support: N/A 00:13:02.706 Firmware Update Granularity: No Information Provided 00:13:02.706 Per-Namespace SMART Log: No 00:13:02.706 Asymmetric Namespace Access Log Page: Not Supported 00:13:02.706 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:13:02.706 Command Effects Log Page: Supported 00:13:02.706 Get Log Page Extended Data: Supported 00:13:02.706 Telemetry Log Pages: Not Supported 00:13:02.707 Persistent Event Log Pages: Not Supported 00:13:02.707 Supported Log Pages Log Page: May Support 00:13:02.707 Commands Supported & Effects Log Page: Not Supported 00:13:02.707 Feature Identifiers & Effects Log Page:May Support 00:13:02.707 NVMe-MI Commands & Effects Log Page: May Support 00:13:02.707 Data Area 4 for Telemetry Log: Not Supported 00:13:02.707 Error Log Page Entries Supported: 128 00:13:02.707 Keep 
Alive: Supported 00:13:02.707 Keep Alive Granularity: 10000 ms 00:13:02.707 00:13:02.707 NVM Command Set Attributes 00:13:02.707 ========================== 00:13:02.707 Submission Queue Entry Size 00:13:02.707 Max: 64 00:13:02.707 Min: 64 00:13:02.707 Completion Queue Entry Size 00:13:02.707 Max: 16 00:13:02.707 Min: 16 00:13:02.707 Number of Namespaces: 32 00:13:02.707 Compare Command: Supported 00:13:02.707 Write Uncorrectable Command: Not Supported 00:13:02.707 Dataset Management Command: Supported 00:13:02.707 Write Zeroes Command: Supported 00:13:02.707 Set Features Save Field: Not Supported 00:13:02.707 Reservations: Not Supported 00:13:02.707 Timestamp: Not Supported 00:13:02.707 Copy: Supported 00:13:02.707 Volatile Write Cache: Present 00:13:02.707 Atomic Write Unit (Normal): 1 00:13:02.707 Atomic Write Unit (PFail): 1 00:13:02.707 Atomic Compare & Write Unit: 1 00:13:02.707 Fused Compare & Write: Supported 00:13:02.707 Scatter-Gather List 00:13:02.707 SGL Command Set: Supported (Dword aligned) 00:13:02.707 SGL Keyed: Not Supported 00:13:02.707 SGL Bit Bucket Descriptor: Not Supported 00:13:02.707 SGL Metadata Pointer: Not Supported 00:13:02.707 Oversized SGL: Not Supported 00:13:02.707 SGL Metadata Address: Not Supported 00:13:02.707 SGL Offset: Not Supported 00:13:02.707 Transport SGL Data Block: Not Supported 00:13:02.707 Replay Protected Memory Block: Not Supported 00:13:02.707 00:13:02.707 Firmware Slot Information 00:13:02.707 ========================= 00:13:02.707 Active slot: 1 00:13:02.707 Slot 1 Firmware Revision: 24.09 00:13:02.707 00:13:02.707 00:13:02.707 Commands Supported and Effects 00:13:02.707 ============================== 00:13:02.707 Admin Commands 00:13:02.707 -------------- 00:13:02.707 Get Log Page (02h): Supported 00:13:02.707 Identify (06h): Supported 00:13:02.707 Abort (08h): Supported 00:13:02.707 Set Features (09h): Supported 00:13:02.707 Get Features (0Ah): Supported 00:13:02.707 Asynchronous Event Request (0Ch): Supported 
00:13:02.707 Keep Alive (18h): Supported 00:13:02.707 I/O Commands 00:13:02.707 ------------ 00:13:02.707 Flush (00h): Supported LBA-Change 00:13:02.707 Write (01h): Supported LBA-Change 00:13:02.707 Read (02h): Supported 00:13:02.707 Compare (05h): Supported 00:13:02.707 Write Zeroes (08h): Supported LBA-Change 00:13:02.707 Dataset Management (09h): Supported LBA-Change 00:13:02.707 Copy (19h): Supported LBA-Change 00:13:02.707 00:13:02.707 Error Log 00:13:02.707 ========= 00:13:02.707 00:13:02.707 Arbitration 00:13:02.707 =========== 00:13:02.707 Arbitration Burst: 1 00:13:02.707 00:13:02.707 Power Management 00:13:02.707 ================ 00:13:02.707 Number of Power States: 1 00:13:02.707 Current Power State: Power State #0 00:13:02.707 Power State #0: 00:13:02.707 Max Power: 0.00 W 00:13:02.707 Non-Operational State: Operational 00:13:02.707 Entry Latency: Not Reported 00:13:02.707 Exit Latency: Not Reported 00:13:02.707 Relative Read Throughput: 0 00:13:02.707 Relative Read Latency: 0 00:13:02.707 Relative Write Throughput: 0 00:13:02.707 Relative Write Latency: 0 00:13:02.707 Idle Power: Not Reported 00:13:02.707 Active Power: Not Reported 00:13:02.707 Non-Operational Permissive Mode: Not Supported 00:13:02.707 00:13:02.707 Health Information 00:13:02.707 ================== 00:13:02.707 Critical Warnings: 00:13:02.707 Available Spare Space: OK 00:13:02.707 Temperature: OK 00:13:02.707 Device Reliability: OK 00:13:02.707 Read Only: No 00:13:02.707 Volatile Memory Backup: OK 00:13:02.707 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:02.707 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:02.707 Available Spare: 0% 00:13:02.707 Available Sp[2024-07-25 01:13:25.059853] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:02.707 [2024-07-25 01:13:25.059864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 
00:13:02.707 [2024-07-25 01:13:25.059889] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:13:02.707 [2024-07-25 01:13:25.059898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:02.707 [2024-07-25 01:13:25.059903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:02.707 [2024-07-25 01:13:25.059909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:02.707 [2024-07-25 01:13:25.059914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:02.707 [2024-07-25 01:13:25.060062] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:02.707 [2024-07-25 01:13:25.060072] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:13:02.707 [2024-07-25 01:13:25.061067] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:02.707 [2024-07-25 01:13:25.061114] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:13:02.707 [2024-07-25 01:13:25.061121] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:13:02.707 [2024-07-25 01:13:25.062072] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:13:02.707 [2024-07-25 01:13:25.062082] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete 
in 0 milliseconds 00:13:02.707 [2024-07-25 01:13:25.062128] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:13:02.707 [2024-07-25 01:13:25.064106] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:02.707 are Threshold: 0% 00:13:02.707 Life Percentage Used: 0% 00:13:02.707 Data Units Read: 0 00:13:02.707 Data Units Written: 0 00:13:02.707 Host Read Commands: 0 00:13:02.707 Host Write Commands: 0 00:13:02.707 Controller Busy Time: 0 minutes 00:13:02.707 Power Cycles: 0 00:13:02.707 Power On Hours: 0 hours 00:13:02.707 Unsafe Shutdowns: 0 00:13:02.707 Unrecoverable Media Errors: 0 00:13:02.707 Lifetime Error Log Entries: 0 00:13:02.707 Warning Temperature Time: 0 minutes 00:13:02.707 Critical Temperature Time: 0 minutes 00:13:02.707 00:13:02.707 Number of Queues 00:13:02.707 ================ 00:13:02.707 Number of I/O Submission Queues: 127 00:13:02.707 Number of I/O Completion Queues: 127 00:13:02.707 00:13:02.707 Active Namespaces 00:13:02.707 ================= 00:13:02.707 Namespace ID:1 00:13:02.707 Error Recovery Timeout: Unlimited 00:13:02.707 Command Set Identifier: NVM (00h) 00:13:02.707 Deallocate: Supported 00:13:02.707 Deallocated/Unwritten Error: Not Supported 00:13:02.707 Deallocated Read Value: Unknown 00:13:02.707 Deallocate in Write Zeroes: Not Supported 00:13:02.707 Deallocated Guard Field: 0xFFFF 00:13:02.707 Flush: Supported 00:13:02.707 Reservation: Supported 00:13:02.707 Namespace Sharing Capabilities: Multiple Controllers 00:13:02.707 Size (in LBAs): 131072 (0GiB) 00:13:02.707 Capacity (in LBAs): 131072 (0GiB) 00:13:02.707 Utilization (in LBAs): 131072 (0GiB) 00:13:02.707 NGUID: 9C4834E1F26E49E5A8AC1CD180CA63A3 00:13:02.707 UUID: 9c4834e1-f26e-49e5-a8ac-1cd180ca63a3 00:13:02.707 Thin Provisioning: Not Supported 00:13:02.707 Per-NS Atomic Units: Yes 00:13:02.707 Atomic Boundary Size (Normal): 0 
00:13:02.707 Atomic Boundary Size (PFail): 0 00:13:02.707 Atomic Boundary Offset: 0 00:13:02.707 Maximum Single Source Range Length: 65535 00:13:02.707 Maximum Copy Length: 65535 00:13:02.707 Maximum Source Range Count: 1 00:13:02.707 NGUID/EUI64 Never Reused: No 00:13:02.707 Namespace Write Protected: No 00:13:02.707 Number of LBA Formats: 1 00:13:02.707 Current LBA Format: LBA Format #00 00:13:02.707 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:02.707 00:13:02.707 01:13:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:02.708 EAL: No free 2048 kB hugepages reported on node 1 00:13:02.968 [2024-07-25 01:13:25.275809] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:08.249 Initializing NVMe Controllers 00:13:08.249 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:08.249 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:08.249 Initialization complete. Launching workers. 
00:13:08.249 ======================================================== 00:13:08.249 Latency(us) 00:13:08.249 Device Information : IOPS MiB/s Average min max 00:13:08.249 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39893.68 155.83 3208.11 969.00 7661.77 00:13:08.249 ======================================================== 00:13:08.249 Total : 39893.68 155.83 3208.11 969.00 7661.77 00:13:08.249 00:13:08.249 [2024-07-25 01:13:30.293882] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:08.249 01:13:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:08.249 EAL: No free 2048 kB hugepages reported on node 1 00:13:08.249 [2024-07-25 01:13:30.513905] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:13.528 Initializing NVMe Controllers 00:13:13.528 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:13.528 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:13.528 Initialization complete. Launching workers. 
00:13:13.528 ======================================================== 00:13:13.528 Latency(us) 00:13:13.528 Device Information : IOPS MiB/s Average min max 00:13:13.528 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16037.44 62.65 7980.63 5992.84 15467.04 00:13:13.528 ======================================================== 00:13:13.528 Total : 16037.44 62.65 7980.63 5992.84 15467.04 00:13:13.528 00:13:13.528 [2024-07-25 01:13:35.547271] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:13.528 01:13:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:13.528 EAL: No free 2048 kB hugepages reported on node 1 00:13:13.528 [2024-07-25 01:13:35.735142] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:18.812 [2024-07-25 01:13:40.813375] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:18.812 Initializing NVMe Controllers 00:13:18.812 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:18.812 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:18.812 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:13:18.812 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:13:18.812 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:13:18.812 Initialization complete. Launching workers. 
00:13:18.812 Starting thread on core 2 00:13:18.812 Starting thread on core 3 00:13:18.812 Starting thread on core 1 00:13:18.812 01:13:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:13:18.812 EAL: No free 2048 kB hugepages reported on node 1 00:13:18.812 [2024-07-25 01:13:41.091535] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:22.106 [2024-07-25 01:13:44.151293] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:22.106 Initializing NVMe Controllers 00:13:22.106 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:22.106 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:22.106 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:13:22.106 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:13:22.106 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:13:22.106 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:13:22.106 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:22.106 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:22.106 Initialization complete. Launching workers. 
00:13:22.106 Starting thread on core 1 with urgent priority queue 00:13:22.106 Starting thread on core 2 with urgent priority queue 00:13:22.106 Starting thread on core 3 with urgent priority queue 00:13:22.106 Starting thread on core 0 with urgent priority queue 00:13:22.106 SPDK bdev Controller (SPDK1 ) core 0: 8722.33 IO/s 11.46 secs/100000 ios 00:13:22.106 SPDK bdev Controller (SPDK1 ) core 1: 8865.00 IO/s 11.28 secs/100000 ios 00:13:22.106 SPDK bdev Controller (SPDK1 ) core 2: 7775.33 IO/s 12.86 secs/100000 ios 00:13:22.106 SPDK bdev Controller (SPDK1 ) core 3: 8801.33 IO/s 11.36 secs/100000 ios 00:13:22.106 ======================================================== 00:13:22.106 00:13:22.106 01:13:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:22.106 EAL: No free 2048 kB hugepages reported on node 1 00:13:22.106 [2024-07-25 01:13:44.416872] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:22.106 Initializing NVMe Controllers 00:13:22.106 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:22.106 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:22.106 Namespace ID: 1 size: 0GB 00:13:22.106 Initialization complete. 00:13:22.106 INFO: using host memory buffer for IO 00:13:22.106 Hello world! 
00:13:22.106 [2024-07-25 01:13:44.451088] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:22.106 01:13:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:22.106 EAL: No free 2048 kB hugepages reported on node 1 00:13:22.366 [2024-07-25 01:13:44.716603] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:23.305 Initializing NVMe Controllers 00:13:23.305 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:23.305 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:23.305 Initialization complete. Launching workers. 00:13:23.305 submit (in ns) avg, min, max = 7902.4, 3281.7, 3997911.3 00:13:23.305 complete (in ns) avg, min, max = 21821.2, 1816.5, 4993460.9 00:13:23.305 00:13:23.305 Submit histogram 00:13:23.305 ================ 00:13:23.305 Range in us Cumulative Count 00:13:23.305 3.270 - 3.283: 0.0184% ( 3) 00:13:23.305 3.283 - 3.297: 0.3439% ( 53) 00:13:23.305 3.297 - 3.311: 0.9211% ( 94) 00:13:23.305 3.311 - 3.325: 1.6457% ( 118) 00:13:23.305 3.325 - 3.339: 3.1317% ( 242) 00:13:23.305 3.339 - 3.353: 7.0494% ( 638) 00:13:23.305 3.353 - 3.367: 12.4839% ( 885) 00:13:23.305 3.367 - 3.381: 18.3605% ( 957) 00:13:23.305 3.381 - 3.395: 24.4581% ( 993) 00:13:23.305 3.395 - 3.409: 30.4575% ( 977) 00:13:23.305 3.409 - 3.423: 35.8244% ( 874) 00:13:23.305 3.423 - 3.437: 41.4676% ( 919) 00:13:23.305 3.437 - 3.450: 46.5275% ( 824) 00:13:23.305 3.450 - 3.464: 50.7706% ( 691) 00:13:23.305 3.464 - 3.478: 55.1735% ( 717) 00:13:23.305 3.478 - 3.492: 60.9457% ( 940) 00:13:23.305 3.492 - 3.506: 67.3872% ( 1049) 00:13:23.305 3.506 - 3.520: 72.1830% ( 781) 00:13:23.305 3.520 - 3.534: 76.8683% ( 763) 00:13:23.305 3.534 - 3.548: 81.4185% ( 741) 
00:13:23.305 3.548 - 3.562: 84.2125% ( 455) 00:13:23.305 3.562 - 3.590: 87.1170% ( 473) 00:13:23.305 3.590 - 3.617: 88.1179% ( 163) 00:13:23.305 3.617 - 3.645: 88.9776% ( 140) 00:13:23.305 3.645 - 3.673: 90.6417% ( 271) 00:13:23.305 3.673 - 3.701: 92.2751% ( 266) 00:13:23.305 3.701 - 3.729: 93.9146% ( 267) 00:13:23.305 3.729 - 3.757: 95.6033% ( 275) 00:13:23.305 3.757 - 3.784: 97.1999% ( 260) 00:13:23.305 3.784 - 3.812: 98.2499% ( 171) 00:13:23.305 3.812 - 3.840: 98.8640% ( 100) 00:13:23.305 3.840 - 3.868: 99.2631% ( 65) 00:13:23.305 3.868 - 3.896: 99.4903% ( 37) 00:13:23.305 3.896 - 3.923: 99.5702% ( 13) 00:13:23.305 3.923 - 3.951: 99.6009% ( 5) 00:13:23.305 3.951 - 3.979: 99.6070% ( 1) 00:13:23.305 4.925 - 4.953: 99.6131% ( 1) 00:13:23.305 4.953 - 4.981: 99.6193% ( 1) 00:13:23.305 5.176 - 5.203: 99.6254% ( 1) 00:13:23.305 5.482 - 5.510: 99.6316% ( 1) 00:13:23.305 5.510 - 5.537: 99.6377% ( 1) 00:13:23.305 5.565 - 5.593: 99.6438% ( 1) 00:13:23.305 5.593 - 5.621: 99.6500% ( 1) 00:13:23.305 5.649 - 5.677: 99.6561% ( 1) 00:13:23.305 5.677 - 5.704: 99.6623% ( 1) 00:13:23.305 5.732 - 5.760: 99.6684% ( 1) 00:13:23.305 5.760 - 5.788: 99.6807% ( 2) 00:13:23.305 5.816 - 5.843: 99.6930% ( 2) 00:13:23.305 5.927 - 5.955: 99.6991% ( 1) 00:13:23.305 5.983 - 6.010: 99.7114% ( 2) 00:13:23.305 6.038 - 6.066: 99.7175% ( 1) 00:13:23.305 6.066 - 6.094: 99.7298% ( 2) 00:13:23.305 6.094 - 6.122: 99.7360% ( 1) 00:13:23.305 6.122 - 6.150: 99.7421% ( 1) 00:13:23.305 6.150 - 6.177: 99.7544% ( 2) 00:13:23.305 6.177 - 6.205: 99.7667% ( 2) 00:13:23.305 6.205 - 6.233: 99.7728% ( 1) 00:13:23.305 6.233 - 6.261: 99.7851% ( 2) 00:13:23.305 6.372 - 6.400: 99.7912% ( 1) 00:13:23.305 6.428 - 6.456: 99.7974% ( 1) 00:13:23.305 6.511 - 6.539: 99.8035% ( 1) 00:13:23.305 6.539 - 6.567: 99.8096% ( 1) 00:13:23.305 6.678 - 6.706: 99.8158% ( 1) 00:13:23.305 6.706 - 6.734: 99.8219% ( 1) 00:13:23.305 6.734 - 6.762: 99.8281% ( 1) 00:13:23.305 6.901 - 6.929: 99.8342% ( 1) 00:13:23.305 7.123 - 7.179: 99.8403% ( 1) 
00:13:23.305 7.235 - 7.290: 99.8465% ( 1) 00:13:23.305 7.457 - 7.513: 99.8588% ( 2) 00:13:23.305 7.736 - 7.791: 99.8710% ( 2) 00:13:23.305 8.070 - 8.125: 99.8772% ( 1) 00:13:23.305 8.459 - 8.515: 99.8833% ( 1) 00:13:23.305 11.297 - 11.353: 99.8895% ( 1) 00:13:23.305 3989.148 - 4017.642: 100.0000% ( 18) 00:13:23.305 00:13:23.305 Complete histogram 00:13:23.305 ================== 00:13:23.305 Range in us Cumulative Count 00:13:23.305 1.809 - 1.823: 0.0737% ( 12) 00:13:23.305 1.823 - 1.837: 1.6211% ( 252) 00:13:23.305 1.837 - 1.850: 3.6045% ( 323) 00:13:23.305 1.850 - [2024-07-25 01:13:45.738371] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:23.305 1.864: 4.9800% ( 224) 00:13:23.305 1.864 - 1.878: 12.0663% ( 1154) 00:13:23.305 1.878 - 1.892: 60.4421% ( 7878) 00:13:23.305 1.892 - 1.906: 89.3215% ( 4703) 00:13:23.305 1.906 - 1.920: 94.0927% ( 777) 00:13:23.305 1.920 - 1.934: 96.1498% ( 335) 00:13:23.305 1.934 - 1.948: 96.7393% ( 96) 00:13:23.305 1.948 - 1.962: 97.7218% ( 160) 00:13:23.305 1.962 - 1.976: 98.7043% ( 160) 00:13:23.305 1.976 - 1.990: 99.0728% ( 60) 00:13:23.305 1.990 - 2.003: 99.1772% ( 17) 00:13:23.305 2.003 - 2.017: 99.2938% ( 19) 00:13:23.305 2.017 - 2.031: 99.3123% ( 3) 00:13:23.305 2.031 - 2.045: 99.3245% ( 2) 00:13:23.305 3.423 - 3.437: 99.3307% ( 1) 00:13:23.305 3.562 - 3.590: 99.3368% ( 1) 00:13:23.305 3.868 - 3.896: 99.3552% ( 3) 00:13:23.305 3.951 - 3.979: 99.3614% ( 1) 00:13:23.305 4.007 - 4.035: 99.3675% ( 1) 00:13:23.305 4.118 - 4.146: 99.3737% ( 1) 00:13:23.305 4.146 - 4.174: 99.3859% ( 2) 00:13:23.305 4.174 - 4.202: 99.3921% ( 1) 00:13:23.305 4.230 - 4.257: 99.4105% ( 3) 00:13:23.305 4.313 - 4.341: 99.4166% ( 1) 00:13:23.305 4.452 - 4.480: 99.4289% ( 2) 00:13:23.305 4.480 - 4.508: 99.4412% ( 2) 00:13:23.305 4.563 - 4.591: 99.4473% ( 1) 00:13:23.305 4.647 - 4.675: 99.4535% ( 1) 00:13:23.305 4.786 - 4.814: 99.4596% ( 1) 00:13:23.305 4.925 - 4.953: 99.4658% ( 1) 00:13:23.305 5.009 - 
5.037: 99.4719% ( 1) 00:13:23.305 5.064 - 5.092: 99.4842% ( 2) 00:13:23.306 5.899 - 5.927: 99.4903% ( 1) 00:13:23.306 6.010 - 6.038: 99.4965% ( 1) 00:13:23.306 7.179 - 7.235: 99.5026% ( 1) 00:13:23.306 3989.148 - 4017.642: 99.9939% ( 80) 00:13:23.306 4986.435 - 5014.929: 100.0000% ( 1) 00:13:23.306 00:13:23.306 01:13:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:13:23.306 01:13:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:23.306 01:13:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:13:23.306 01:13:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:13:23.306 01:13:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:23.566 [ 00:13:23.566 { 00:13:23.566 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:23.566 "subtype": "Discovery", 00:13:23.566 "listen_addresses": [], 00:13:23.566 "allow_any_host": true, 00:13:23.566 "hosts": [] 00:13:23.566 }, 00:13:23.566 { 00:13:23.566 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:23.566 "subtype": "NVMe", 00:13:23.566 "listen_addresses": [ 00:13:23.566 { 00:13:23.566 "trtype": "VFIOUSER", 00:13:23.566 "adrfam": "IPv4", 00:13:23.566 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:23.566 "trsvcid": "0" 00:13:23.566 } 00:13:23.566 ], 00:13:23.566 "allow_any_host": true, 00:13:23.566 "hosts": [], 00:13:23.566 "serial_number": "SPDK1", 00:13:23.566 "model_number": "SPDK bdev Controller", 00:13:23.566 "max_namespaces": 32, 00:13:23.566 "min_cntlid": 1, 00:13:23.566 "max_cntlid": 65519, 00:13:23.566 "namespaces": [ 00:13:23.566 { 00:13:23.566 "nsid": 1, 00:13:23.566 "bdev_name": "Malloc1", 00:13:23.566 "name": "Malloc1", 00:13:23.566 "nguid": 
"9C4834E1F26E49E5A8AC1CD180CA63A3", 00:13:23.566 "uuid": "9c4834e1-f26e-49e5-a8ac-1cd180ca63a3" 00:13:23.566 } 00:13:23.566 ] 00:13:23.566 }, 00:13:23.566 { 00:13:23.566 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:23.566 "subtype": "NVMe", 00:13:23.566 "listen_addresses": [ 00:13:23.566 { 00:13:23.566 "trtype": "VFIOUSER", 00:13:23.566 "adrfam": "IPv4", 00:13:23.566 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:23.566 "trsvcid": "0" 00:13:23.566 } 00:13:23.566 ], 00:13:23.566 "allow_any_host": true, 00:13:23.566 "hosts": [], 00:13:23.566 "serial_number": "SPDK2", 00:13:23.566 "model_number": "SPDK bdev Controller", 00:13:23.566 "max_namespaces": 32, 00:13:23.566 "min_cntlid": 1, 00:13:23.566 "max_cntlid": 65519, 00:13:23.566 "namespaces": [ 00:13:23.566 { 00:13:23.566 "nsid": 1, 00:13:23.566 "bdev_name": "Malloc2", 00:13:23.566 "name": "Malloc2", 00:13:23.566 "nguid": "33796D9009E14F559249A0FBDBAFFD39", 00:13:23.566 "uuid": "33796d90-09e1-4f55-9249-a0fbdbaffd39" 00:13:23.566 } 00:13:23.566 ] 00:13:23.566 } 00:13:23.566 ] 00:13:23.566 01:13:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:23.566 01:13:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:13:23.566 01:13:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=825735 00:13:23.566 01:13:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:23.566 01:13:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:13:23.566 01:13:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:23.566 01:13:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:13:23.566 01:13:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:13:23.566 01:13:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:23.566 01:13:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:13:23.566 EAL: No free 2048 kB hugepages reported on node 1 00:13:23.825 [2024-07-25 01:13:46.100548] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:23.825 Malloc3 00:13:23.825 01:13:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:13:24.084 [2024-07-25 01:13:46.342285] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:24.084 01:13:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:24.084 Asynchronous Event Request test 00:13:24.084 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:24.084 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:24.084 Registering asynchronous event callbacks... 00:13:24.084 Starting namespace attribute notice tests for all controllers... 00:13:24.084 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:24.084 aer_cb - Changed Namespace 00:13:24.084 Cleaning up... 
00:13:24.084 [ 00:13:24.084 { 00:13:24.084 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:24.084 "subtype": "Discovery", 00:13:24.084 "listen_addresses": [], 00:13:24.084 "allow_any_host": true, 00:13:24.084 "hosts": [] 00:13:24.084 }, 00:13:24.084 { 00:13:24.084 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:24.084 "subtype": "NVMe", 00:13:24.084 "listen_addresses": [ 00:13:24.084 { 00:13:24.084 "trtype": "VFIOUSER", 00:13:24.084 "adrfam": "IPv4", 00:13:24.084 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:24.084 "trsvcid": "0" 00:13:24.084 } 00:13:24.084 ], 00:13:24.084 "allow_any_host": true, 00:13:24.084 "hosts": [], 00:13:24.084 "serial_number": "SPDK1", 00:13:24.084 "model_number": "SPDK bdev Controller", 00:13:24.084 "max_namespaces": 32, 00:13:24.084 "min_cntlid": 1, 00:13:24.084 "max_cntlid": 65519, 00:13:24.084 "namespaces": [ 00:13:24.084 { 00:13:24.084 "nsid": 1, 00:13:24.084 "bdev_name": "Malloc1", 00:13:24.084 "name": "Malloc1", 00:13:24.084 "nguid": "9C4834E1F26E49E5A8AC1CD180CA63A3", 00:13:24.084 "uuid": "9c4834e1-f26e-49e5-a8ac-1cd180ca63a3" 00:13:24.084 }, 00:13:24.084 { 00:13:24.084 "nsid": 2, 00:13:24.084 "bdev_name": "Malloc3", 00:13:24.084 "name": "Malloc3", 00:13:24.084 "nguid": "E2CF0F8D59494DB2A9531F6FB4E5D7F9", 00:13:24.084 "uuid": "e2cf0f8d-5949-4db2-a953-1f6fb4e5d7f9" 00:13:24.084 } 00:13:24.084 ] 00:13:24.084 }, 00:13:24.084 { 00:13:24.084 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:24.084 "subtype": "NVMe", 00:13:24.084 "listen_addresses": [ 00:13:24.084 { 00:13:24.084 "trtype": "VFIOUSER", 00:13:24.084 "adrfam": "IPv4", 00:13:24.084 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:24.084 "trsvcid": "0" 00:13:24.084 } 00:13:24.084 ], 00:13:24.084 "allow_any_host": true, 00:13:24.084 "hosts": [], 00:13:24.084 "serial_number": "SPDK2", 00:13:24.084 "model_number": "SPDK bdev Controller", 00:13:24.084 "max_namespaces": 32, 00:13:24.084 "min_cntlid": 1, 00:13:24.084 "max_cntlid": 65519, 00:13:24.084 "namespaces": [ 
00:13:24.084 { 00:13:24.084 "nsid": 1, 00:13:24.084 "bdev_name": "Malloc2", 00:13:24.084 "name": "Malloc2", 00:13:24.084 "nguid": "33796D9009E14F559249A0FBDBAFFD39", 00:13:24.084 "uuid": "33796d90-09e1-4f55-9249-a0fbdbaffd39" 00:13:24.084 } 00:13:24.084 ] 00:13:24.084 } 00:13:24.084 ] 00:13:24.084 01:13:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 825735 00:13:24.084 01:13:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:24.084 01:13:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:24.084 01:13:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:13:24.084 01:13:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:24.084 [2024-07-25 01:13:46.561879] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:13:24.084 [2024-07-25 01:13:46.561908] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid825819 ] 00:13:24.084 EAL: No free 2048 kB hugepages reported on node 1 00:13:24.344 [2024-07-25 01:13:46.589446] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:13:24.344 [2024-07-25 01:13:46.597264] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:24.344 [2024-07-25 01:13:46.597287] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fe2b52c3000 00:13:24.344 [2024-07-25 01:13:46.598264] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:24.344 [2024-07-25 01:13:46.599269] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:24.344 [2024-07-25 01:13:46.600274] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:24.344 [2024-07-25 01:13:46.601285] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:24.344 [2024-07-25 01:13:46.602297] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:24.344 [2024-07-25 01:13:46.603303] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:24.344 [2024-07-25 01:13:46.604314] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, 
Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:24.344 [2024-07-25 01:13:46.605324] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:24.344 [2024-07-25 01:13:46.606336] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:24.344 [2024-07-25 01:13:46.606345] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fe2b52b8000 00:13:24.344 [2024-07-25 01:13:46.607287] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:24.344 [2024-07-25 01:13:46.616802] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:13:24.344 [2024-07-25 01:13:46.616823] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:13:24.344 [2024-07-25 01:13:46.621897] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:24.344 [2024-07-25 01:13:46.621937] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:24.344 [2024-07-25 01:13:46.622005] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:13:24.344 [2024-07-25 01:13:46.622024] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:13:24.345 [2024-07-25 01:13:46.622029] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:13:24.345 [2024-07-25 01:13:46.622902] 
nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:13:24.345 [2024-07-25 01:13:46.622911] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:13:24.345 [2024-07-25 01:13:46.622917] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:13:24.345 [2024-07-25 01:13:46.623913] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:24.345 [2024-07-25 01:13:46.623921] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:13:24.345 [2024-07-25 01:13:46.623928] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:13:24.345 [2024-07-25 01:13:46.625024] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:13:24.345 [2024-07-25 01:13:46.625037] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:24.345 [2024-07-25 01:13:46.625989] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:13:24.345 [2024-07-25 01:13:46.625998] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:13:24.345 [2024-07-25 01:13:46.626002] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:13:24.345 [2024-07-25 01:13:46.626008] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:24.345 [2024-07-25 01:13:46.626114] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:13:24.345 [2024-07-25 01:13:46.626118] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:24.345 [2024-07-25 01:13:46.626123] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:13:24.345 [2024-07-25 01:13:46.627003] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:13:24.345 [2024-07-25 01:13:46.628009] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:13:24.345 [2024-07-25 01:13:46.629017] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:24.345 [2024-07-25 01:13:46.630016] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:24.345 [2024-07-25 01:13:46.630055] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:24.345 [2024-07-25 01:13:46.631030] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:13:24.345 [2024-07-25 01:13:46.631039] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:24.345 [2024-07-25 01:13:46.631046] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:13:24.345 [2024-07-25 01:13:46.631065] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:13:24.345 [2024-07-25 01:13:46.631072] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:13:24.345 [2024-07-25 01:13:46.631083] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:24.345 [2024-07-25 01:13:46.631087] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:24.345 [2024-07-25 01:13:46.631098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:24.345 [2024-07-25 01:13:46.639051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:24.345 [2024-07-25 01:13:46.639062] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:13:24.345 [2024-07-25 01:13:46.639069] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:13:24.345 [2024-07-25 01:13:46.639073] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:13:24.345 [2024-07-25 01:13:46.639077] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:24.345 [2024-07-25 01:13:46.639081] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:13:24.345 [2024-07-25 
01:13:46.639085] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:13:24.345 [2024-07-25 01:13:46.639089] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:13:24.345 [2024-07-25 01:13:46.639096] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:13:24.345 [2024-07-25 01:13:46.639106] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:24.345 [2024-07-25 01:13:46.647047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:24.345 [2024-07-25 01:13:46.647062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:24.345 [2024-07-25 01:13:46.647070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:24.345 [2024-07-25 01:13:46.647078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:24.345 [2024-07-25 01:13:46.647085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:24.345 [2024-07-25 01:13:46.647089] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:13:24.345 [2024-07-25 01:13:46.647096] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:24.345 [2024-07-25 
01:13:46.647105] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:24.345 [2024-07-25 01:13:46.655047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:24.345 [2024-07-25 01:13:46.655055] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:13:24.345 [2024-07-25 01:13:46.655062] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:24.345 [2024-07-25 01:13:46.655068] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:13:24.345 [2024-07-25 01:13:46.655073] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:13:24.345 [2024-07-25 01:13:46.655082] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:24.345 [2024-07-25 01:13:46.663049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:24.345 [2024-07-25 01:13:46.663100] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:13:24.345 [2024-07-25 01:13:46.663107] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:13:24.345 [2024-07-25 01:13:46.663114] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:24.345 [2024-07-25 
01:13:46.663118] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:24.345 [2024-07-25 01:13:46.663125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:24.345 [2024-07-25 01:13:46.671056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:24.345 [2024-07-25 01:13:46.671067] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:13:24.345 [2024-07-25 01:13:46.671078] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:13:24.345 [2024-07-25 01:13:46.671085] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:13:24.345 [2024-07-25 01:13:46.671091] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:24.345 [2024-07-25 01:13:46.671095] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:24.345 [2024-07-25 01:13:46.671101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:24.345 [2024-07-25 01:13:46.679048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:24.345 [2024-07-25 01:13:46.679061] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:24.345 [2024-07-25 01:13:46.679068] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id 
descriptors (timeout 30000 ms) 00:13:24.345 [2024-07-25 01:13:46.679075] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:24.345 [2024-07-25 01:13:46.679079] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:24.345 [2024-07-25 01:13:46.679084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:24.345 [2024-07-25 01:13:46.687046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:24.345 [2024-07-25 01:13:46.687055] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:24.345 [2024-07-25 01:13:46.687062] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:13:24.345 [2024-07-25 01:13:46.687072] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:13:24.345 [2024-07-25 01:13:46.687077] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:13:24.346 [2024-07-25 01:13:46.687082] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:24.346 [2024-07-25 01:13:46.687086] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:13:24.346 [2024-07-25 01:13:46.687091] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - 
Host ID 00:13:24.346 [2024-07-25 01:13:46.687095] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:13:24.346 [2024-07-25 01:13:46.687099] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:13:24.346 [2024-07-25 01:13:46.687116] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:24.346 [2024-07-25 01:13:46.695048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:24.346 [2024-07-25 01:13:46.695060] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:24.346 [2024-07-25 01:13:46.703049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:24.346 [2024-07-25 01:13:46.703061] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:24.346 [2024-07-25 01:13:46.711047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:24.346 [2024-07-25 01:13:46.711058] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:24.346 [2024-07-25 01:13:46.719047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:24.346 [2024-07-25 01:13:46.719063] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:24.346 [2024-07-25 01:13:46.719067] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:24.346 [2024-07-25 
01:13:46.719070] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:24.346 [2024-07-25 01:13:46.719073] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:24.346 [2024-07-25 01:13:46.719079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:24.346 [2024-07-25 01:13:46.719085] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:24.346 [2024-07-25 01:13:46.719089] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:24.346 [2024-07-25 01:13:46.719095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:24.346 [2024-07-25 01:13:46.719101] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:24.346 [2024-07-25 01:13:46.719105] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:24.346 [2024-07-25 01:13:46.719110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:24.346 [2024-07-25 01:13:46.719118] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:24.346 [2024-07-25 01:13:46.719122] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:24.346 [2024-07-25 01:13:46.719128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:24.346 [2024-07-25 01:13:46.727047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 
cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:24.346 [2024-07-25 01:13:46.727060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:24.346 [2024-07-25 01:13:46.727070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:24.346 [2024-07-25 01:13:46.727076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:24.346 ===================================================== 00:13:24.346 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:24.346 ===================================================== 00:13:24.346 Controller Capabilities/Features 00:13:24.346 ================================ 00:13:24.346 Vendor ID: 4e58 00:13:24.346 Subsystem Vendor ID: 4e58 00:13:24.346 Serial Number: SPDK2 00:13:24.346 Model Number: SPDK bdev Controller 00:13:24.346 Firmware Version: 24.09 00:13:24.346 Recommended Arb Burst: 6 00:13:24.346 IEEE OUI Identifier: 8d 6b 50 00:13:24.346 Multi-path I/O 00:13:24.346 May have multiple subsystem ports: Yes 00:13:24.346 May have multiple controllers: Yes 00:13:24.346 Associated with SR-IOV VF: No 00:13:24.346 Max Data Transfer Size: 131072 00:13:24.346 Max Number of Namespaces: 32 00:13:24.346 Max Number of I/O Queues: 127 00:13:24.346 NVMe Specification Version (VS): 1.3 00:13:24.346 NVMe Specification Version (Identify): 1.3 00:13:24.346 Maximum Queue Entries: 256 00:13:24.346 Contiguous Queues Required: Yes 00:13:24.346 Arbitration Mechanisms Supported 00:13:24.346 Weighted Round Robin: Not Supported 00:13:24.346 Vendor Specific: Not Supported 00:13:24.346 Reset Timeout: 15000 ms 00:13:24.346 Doorbell Stride: 4 bytes 00:13:24.346 NVM Subsystem Reset: Not Supported 00:13:24.346 Command Sets Supported 00:13:24.346 NVM Command Set: Supported 00:13:24.346 Boot Partition: Not Supported 
00:13:24.346 Memory Page Size Minimum: 4096 bytes 00:13:24.346 Memory Page Size Maximum: 4096 bytes 00:13:24.346 Persistent Memory Region: Not Supported 00:13:24.346 Optional Asynchronous Events Supported 00:13:24.346 Namespace Attribute Notices: Supported 00:13:24.346 Firmware Activation Notices: Not Supported 00:13:24.346 ANA Change Notices: Not Supported 00:13:24.346 PLE Aggregate Log Change Notices: Not Supported 00:13:24.346 LBA Status Info Alert Notices: Not Supported 00:13:24.346 EGE Aggregate Log Change Notices: Not Supported 00:13:24.346 Normal NVM Subsystem Shutdown event: Not Supported 00:13:24.346 Zone Descriptor Change Notices: Not Supported 00:13:24.346 Discovery Log Change Notices: Not Supported 00:13:24.346 Controller Attributes 00:13:24.346 128-bit Host Identifier: Supported 00:13:24.346 Non-Operational Permissive Mode: Not Supported 00:13:24.346 NVM Sets: Not Supported 00:13:24.346 Read Recovery Levels: Not Supported 00:13:24.346 Endurance Groups: Not Supported 00:13:24.346 Predictable Latency Mode: Not Supported 00:13:24.346 Traffic Based Keep Alive: Not Supported 00:13:24.346 Namespace Granularity: Not Supported 00:13:24.346 SQ Associations: Not Supported 00:13:24.346 UUID List: Not Supported 00:13:24.346 Multi-Domain Subsystem: Not Supported 00:13:24.346 Fixed Capacity Management: Not Supported 00:13:24.346 Variable Capacity Management: Not Supported 00:13:24.346 Delete Endurance Group: Not Supported 00:13:24.346 Delete NVM Set: Not Supported 00:13:24.346 Extended LBA Formats Supported: Not Supported 00:13:24.346 Flexible Data Placement Supported: Not Supported 00:13:24.346 00:13:24.346 Controller Memory Buffer Support 00:13:24.346 ================================ 00:13:24.346 Supported: No 00:13:24.346 00:13:24.346 Persistent Memory Region Support 00:13:24.346 ================================ 00:13:24.346 Supported: No 00:13:24.346 00:13:24.346 Admin Command Set Attributes 00:13:24.346 ============================ 00:13:24.346 Security 
Send/Receive: Not Supported 00:13:24.346 Format NVM: Not Supported 00:13:24.346 Firmware Activate/Download: Not Supported 00:13:24.346 Namespace Management: Not Supported 00:13:24.346 Device Self-Test: Not Supported 00:13:24.346 Directives: Not Supported 00:13:24.346 NVMe-MI: Not Supported 00:13:24.346 Virtualization Management: Not Supported 00:13:24.346 Doorbell Buffer Config: Not Supported 00:13:24.346 Get LBA Status Capability: Not Supported 00:13:24.346 Command & Feature Lockdown Capability: Not Supported 00:13:24.346 Abort Command Limit: 4 00:13:24.346 Async Event Request Limit: 4 00:13:24.346 Number of Firmware Slots: N/A 00:13:24.346 Firmware Slot 1 Read-Only: N/A 00:13:24.346 Firmware Activation Without Reset: N/A 00:13:24.346 Multiple Update Detection Support: N/A 00:13:24.346 Firmware Update Granularity: No Information Provided 00:13:24.346 Per-Namespace SMART Log: No 00:13:24.346 Asymmetric Namespace Access Log Page: Not Supported 00:13:24.346 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:13:24.346 Command Effects Log Page: Supported 00:13:24.346 Get Log Page Extended Data: Supported 00:13:24.346 Telemetry Log Pages: Not Supported 00:13:24.346 Persistent Event Log Pages: Not Supported 00:13:24.346 Supported Log Pages Log Page: May Support 00:13:24.346 Commands Supported & Effects Log Page: Not Supported 00:13:24.346 Feature Identifiers & Effects Log Page: May Support 00:13:24.346 NVMe-MI Commands & Effects Log Page: May Support 00:13:24.346 Data Area 4 for Telemetry Log: Not Supported 00:13:24.346 Error Log Page Entries Supported: 128 00:13:24.346 Keep Alive: Supported 00:13:24.346 Keep Alive Granularity: 10000 ms 00:13:24.346 00:13:24.346 NVM Command Set Attributes 00:13:24.346 ========================== 00:13:24.346 Submission Queue Entry Size 00:13:24.346 Max: 64 00:13:24.346 Min: 64 00:13:24.346 Completion Queue Entry Size 00:13:24.346 Max: 16 00:13:24.346 Min: 16 00:13:24.347 Number of Namespaces: 32 00:13:24.347 Compare Command: Supported 
00:13:24.347 Write Uncorrectable Command: Not Supported 00:13:24.347 Dataset Management Command: Supported 00:13:24.347 Write Zeroes Command: Supported 00:13:24.347 Set Features Save Field: Not Supported 00:13:24.347 Reservations: Not Supported 00:13:24.347 Timestamp: Not Supported 00:13:24.347 Copy: Supported 00:13:24.347 Volatile Write Cache: Present 00:13:24.347 Atomic Write Unit (Normal): 1 00:13:24.347 Atomic Write Unit (PFail): 1 00:13:24.347 Atomic Compare & Write Unit: 1 00:13:24.347 Fused Compare & Write: Supported 00:13:24.347 Scatter-Gather List 00:13:24.347 SGL Command Set: Supported (Dword aligned) 00:13:24.347 SGL Keyed: Not Supported 00:13:24.347 SGL Bit Bucket Descriptor: Not Supported 00:13:24.347 SGL Metadata Pointer: Not Supported 00:13:24.347 Oversized SGL: Not Supported 00:13:24.347 SGL Metadata Address: Not Supported 00:13:24.347 SGL Offset: Not Supported 00:13:24.347 Transport SGL Data Block: Not Supported 00:13:24.347 Replay Protected Memory Block: Not Supported 00:13:24.347 00:13:24.347 Firmware Slot Information 00:13:24.347 ========================= 00:13:24.347 Active slot: 1 00:13:24.347 Slot 1 Firmware Revision: 24.09 00:13:24.347 00:13:24.347 00:13:24.347 Commands Supported and Effects 00:13:24.347 ============================== 00:13:24.347 Admin Commands 00:13:24.347 -------------- 00:13:24.347 Get Log Page (02h): Supported 00:13:24.347 Identify (06h): Supported 00:13:24.347 Abort (08h): Supported 00:13:24.347 Set Features (09h): Supported 00:13:24.347 Get Features (0Ah): Supported 00:13:24.347 Asynchronous Event Request (0Ch): Supported 00:13:24.347 Keep Alive (18h): Supported 00:13:24.347 I/O Commands 00:13:24.347 ------------ 00:13:24.347 Flush (00h): Supported LBA-Change 00:13:24.347 Write (01h): Supported LBA-Change 00:13:24.347 Read (02h): Supported 00:13:24.347 Compare (05h): Supported 00:13:24.347 Write Zeroes (08h): Supported LBA-Change 00:13:24.347 Dataset Management (09h): Supported LBA-Change 00:13:24.347 Copy (19h): 
Supported LBA-Change 00:13:24.347 00:13:24.347 Error Log 00:13:24.347 ========= 00:13:24.347 00:13:24.347 Arbitration 00:13:24.347 =========== 00:13:24.347 Arbitration Burst: 1 00:13:24.347 00:13:24.347 Power Management 00:13:24.347 ================ 00:13:24.347 Number of Power States: 1 00:13:24.347 Current Power State: Power State #0 00:13:24.347 Power State #0: 00:13:24.347 Max Power: 0.00 W 00:13:24.347 Non-Operational State: Operational 00:13:24.347 Entry Latency: Not Reported 00:13:24.347 Exit Latency: Not Reported 00:13:24.347 Relative Read Throughput: 0 00:13:24.347 Relative Read Latency: 0 00:13:24.347 Relative Write Throughput: 0 00:13:24.347 Relative Write Latency: 0 00:13:24.347 Idle Power: Not Reported 00:13:24.347 Active Power: Not Reported 00:13:24.347 Non-Operational Permissive Mode: Not Supported 00:13:24.347 00:13:24.347 Health Information 00:13:24.347 ================== 00:13:24.347 Critical Warnings: 00:13:24.347 Available Spare Space: OK 00:13:24.347 Temperature: OK 00:13:24.347 Device Reliability: OK 00:13:24.347 Read Only: No 00:13:24.347 Volatile Memory Backup: OK 00:13:24.347 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:24.347 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:24.347 Available Spare: 0% 00:13:24.347 Available Spare Threshold: 0% 00:13:24.347 [2024-07-25 01:13:46.727167] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:24.347 [2024-07-25 01:13:46.735049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:24.347 [2024-07-25 01:13:46.735079] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:13:24.347 [2024-07-25 01:13:46.735087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.347 [2024-07-25 01:13:46.735093] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.347 [2024-07-25 01:13:46.735098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.347 [2024-07-25 01:13:46.735104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.347 [2024-07-25 01:13:46.735150] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:24.347 [2024-07-25 01:13:46.735160] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:13:24.347 [2024-07-25 01:13:46.736155] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:24.347 [2024-07-25 01:13:46.736196] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:13:24.347 [2024-07-25 01:13:46.736202] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:13:24.347 [2024-07-25 01:13:46.737161] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:13:24.347 [2024-07-25 01:13:46.737172] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:13:24.347 [2024-07-25 01:13:46.737218] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:13:24.347 [2024-07-25 01:13:46.738192] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:24.347 
Life Percentage Used: 0% 00:13:24.347 Data Units Read: 0 00:13:24.347 Data Units Written: 0 00:13:24.347 Host Read Commands: 0 00:13:24.347 Host Write Commands: 0 00:13:24.347 Controller Busy Time: 0 minutes 00:13:24.347 Power Cycles: 0 00:13:24.347 Power On Hours: 0 hours 00:13:24.347 Unsafe Shutdowns: 0 00:13:24.347 Unrecoverable Media Errors: 0 00:13:24.347 Lifetime Error Log Entries: 0 00:13:24.347 Warning Temperature Time: 0 minutes 00:13:24.347 Critical Temperature Time: 0 minutes 00:13:24.347 00:13:24.347 Number of Queues 00:13:24.347 ================ 00:13:24.347 Number of I/O Submission Queues: 127 00:13:24.347 Number of I/O Completion Queues: 127 00:13:24.347 00:13:24.347 Active Namespaces 00:13:24.347 ================= 00:13:24.347 Namespace ID:1 00:13:24.347 Error Recovery Timeout: Unlimited 00:13:24.347 Command Set Identifier: NVM (00h) 00:13:24.347 Deallocate: Supported 00:13:24.347 Deallocated/Unwritten Error: Not Supported 00:13:24.347 Deallocated Read Value: Unknown 00:13:24.347 Deallocate in Write Zeroes: Not Supported 00:13:24.347 Deallocated Guard Field: 0xFFFF 00:13:24.347 Flush: Supported 00:13:24.347 Reservation: Supported 00:13:24.347 Namespace Sharing Capabilities: Multiple Controllers 00:13:24.347 Size (in LBAs): 131072 (0GiB) 00:13:24.347 Capacity (in LBAs): 131072 (0GiB) 00:13:24.347 Utilization (in LBAs): 131072 (0GiB) 00:13:24.347 NGUID: 33796D9009E14F559249A0FBDBAFFD39 00:13:24.347 UUID: 33796d90-09e1-4f55-9249-a0fbdbaffd39 00:13:24.347 Thin Provisioning: Not Supported 00:13:24.347 Per-NS Atomic Units: Yes 00:13:24.347 Atomic Boundary Size (Normal): 0 00:13:24.347 Atomic Boundary Size (PFail): 0 00:13:24.347 Atomic Boundary Offset: 0 00:13:24.347 Maximum Single Source Range Length: 65535 00:13:24.347 Maximum Copy Length: 65535 00:13:24.347 Maximum Source Range Count: 1 00:13:24.347 NGUID/EUI64 Never Reused: No 00:13:24.347 Namespace Write Protected: No 00:13:24.347 Number of LBA Formats: 1 00:13:24.347 Current LBA Format: LBA Format 
#00 00:13:24.347 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:24.347 00:13:24.347 01:13:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:24.347 EAL: No free 2048 kB hugepages reported on node 1 00:13:24.606 [2024-07-25 01:13:46.941354] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:29.883 Initializing NVMe Controllers 00:13:29.883 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:29.883 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:29.883 Initialization complete. Launching workers. 00:13:29.883 ======================================================== 00:13:29.883 Latency(us) 00:13:29.883 Device Information : IOPS MiB/s Average min max 00:13:29.883 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39934.82 156.00 3204.84 968.06 6775.59 00:13:29.883 ======================================================== 00:13:29.883 Total : 39934.82 156.00 3204.84 968.06 6775.59 00:13:29.883 00:13:29.883 [2024-07-25 01:13:52.048304] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:29.883 01:13:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:29.883 EAL: No free 2048 kB hugepages reported on node 1 00:13:29.883 [2024-07-25 01:13:52.263926] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:35.218 
Initializing NVMe Controllers 00:13:35.218 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:35.218 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:35.218 Initialization complete. Launching workers. 00:13:35.218 ======================================================== 00:13:35.218 Latency(us) 00:13:35.218 Device Information : IOPS MiB/s Average min max 00:13:35.218 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39935.42 156.00 3204.77 982.38 7584.74 00:13:35.218 ======================================================== 00:13:35.218 Total : 39935.42 156.00 3204.77 982.38 7584.74 00:13:35.218 00:13:35.218 [2024-07-25 01:13:57.281975] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:35.218 01:13:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:35.218 EAL: No free 2048 kB hugepages reported on node 1 00:13:35.218 [2024-07-25 01:13:57.480426] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:40.498 [2024-07-25 01:14:02.616138] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:40.498 Initializing NVMe Controllers 00:13:40.498 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:40.498 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:40.498 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:13:40.498 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 
00:13:40.498 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:13:40.498 Initialization complete. Launching workers. 00:13:40.498 Starting thread on core 2 00:13:40.498 Starting thread on core 3 00:13:40.498 Starting thread on core 1 00:13:40.498 01:14:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:13:40.498 EAL: No free 2048 kB hugepages reported on node 1 00:13:40.498 [2024-07-25 01:14:02.901474] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:43.805 [2024-07-25 01:14:05.961781] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:43.805 Initializing NVMe Controllers 00:13:43.805 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:43.805 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:43.805 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:13:43.805 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:13:43.805 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:13:43.805 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:13:43.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:43.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:43.805 Initialization complete. Launching workers. 
00:13:43.805 Starting thread on core 1 with urgent priority queue 00:13:43.805 Starting thread on core 2 with urgent priority queue 00:13:43.805 Starting thread on core 3 with urgent priority queue 00:13:43.805 Starting thread on core 0 with urgent priority queue 00:13:43.805 SPDK bdev Controller (SPDK2 ) core 0: 3055.00 IO/s 32.73 secs/100000 ios 00:13:43.805 SPDK bdev Controller (SPDK2 ) core 1: 3387.67 IO/s 29.52 secs/100000 ios 00:13:43.805 SPDK bdev Controller (SPDK2 ) core 2: 3578.00 IO/s 27.95 secs/100000 ios 00:13:43.805 SPDK bdev Controller (SPDK2 ) core 3: 2709.33 IO/s 36.91 secs/100000 ios 00:13:43.805 ======================================================== 00:13:43.805 00:13:43.805 01:14:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:43.805 EAL: No free 2048 kB hugepages reported on node 1 00:13:43.805 [2024-07-25 01:14:06.230436] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:43.805 Initializing NVMe Controllers 00:13:43.805 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:43.805 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:43.805 Namespace ID: 1 size: 0GB 00:13:43.805 Initialization complete. 00:13:43.805 INFO: using host memory buffer for IO 00:13:43.805 Hello world! 
00:13:43.805 [2024-07-25 01:14:06.240495] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:43.805 01:14:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:44.065 EAL: No free 2048 kB hugepages reported on node 1 00:13:44.065 [2024-07-25 01:14:06.512922] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:45.447 Initializing NVMe Controllers 00:13:45.447 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:45.447 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:45.447 Initialization complete. Launching workers. 00:13:45.447 submit (in ns) avg, min, max = 6183.1, 3283.5, 4000238.3 00:13:45.447 complete (in ns) avg, min, max = 20890.3, 1796.5, 3997766.1 00:13:45.447 00:13:45.447 Submit histogram 00:13:45.447 ================ 00:13:45.447 Range in us Cumulative Count 00:13:45.447 3.283 - 3.297: 0.0550% ( 9) 00:13:45.447 3.297 - 3.311: 0.1893% ( 22) 00:13:45.447 3.311 - 3.325: 0.4091% ( 36) 00:13:45.447 3.325 - 3.339: 1.0075% ( 98) 00:13:45.447 3.339 - 3.353: 3.2424% ( 366) 00:13:45.447 3.353 - 3.367: 7.7731% ( 742) 00:13:45.447 3.367 - 3.381: 13.5129% ( 940) 00:13:45.447 3.381 - 3.395: 19.3686% ( 959) 00:13:45.447 3.395 - 3.409: 25.1206% ( 942) 00:13:45.447 3.409 - 3.423: 30.9153% ( 949) 00:13:45.447 3.423 - 3.437: 35.9223% ( 820) 00:13:45.447 3.437 - 3.450: 41.4606% ( 907) 00:13:45.447 3.450 - 3.464: 46.3211% ( 796) 00:13:45.447 3.464 - 3.478: 50.3328% ( 657) 00:13:45.447 3.478 - 3.492: 54.9246% ( 752) 00:13:45.447 3.492 - 3.506: 62.1176% ( 1178) 00:13:45.447 3.506 - 3.520: 68.6206% ( 1065) 00:13:45.447 3.520 - 3.534: 72.1011% ( 570) 00:13:45.447 3.534 - 3.548: 77.1265% ( 823) 00:13:45.447 3.548 - 3.562: 81.8465% ( 773) 
00:13:45.447 3.562 - 3.590: 86.4017% ( 746) 00:13:45.447 3.590 - 3.617: 87.7267% ( 217) 00:13:45.447 3.617 - 3.645: 88.4594% ( 120) 00:13:45.447 3.645 - 3.673: 89.7845% ( 217) 00:13:45.447 3.673 - 3.701: 91.7018% ( 314) 00:13:45.447 3.701 - 3.729: 93.2894% ( 260) 00:13:45.447 3.729 - 3.757: 94.8647% ( 258) 00:13:45.447 3.757 - 3.784: 96.5439% ( 275) 00:13:45.447 3.784 - 3.812: 97.9178% ( 225) 00:13:45.447 3.812 - 3.840: 98.7238% ( 132) 00:13:45.447 3.840 - 3.868: 99.1940% ( 77) 00:13:45.447 3.868 - 3.896: 99.4443% ( 41) 00:13:45.447 3.896 - 3.923: 99.6092% ( 27) 00:13:45.447 3.923 - 3.951: 99.6520% ( 7) 00:13:45.447 3.951 - 3.979: 99.6581% ( 1) 00:13:45.447 3.979 - 4.007: 99.6642% ( 1) 00:13:45.447 4.035 - 4.063: 99.6703% ( 1) 00:13:45.447 4.953 - 4.981: 99.6764% ( 1) 00:13:45.447 5.009 - 5.037: 99.6825% ( 1) 00:13:45.447 5.148 - 5.176: 99.6886% ( 1) 00:13:45.447 5.176 - 5.203: 99.6947% ( 1) 00:13:45.447 5.287 - 5.315: 99.7008% ( 1) 00:13:45.447 5.343 - 5.370: 99.7069% ( 1) 00:13:45.447 5.454 - 5.482: 99.7130% ( 1) 00:13:45.447 5.482 - 5.510: 99.7252% ( 2) 00:13:45.447 5.537 - 5.565: 99.7313% ( 1) 00:13:45.447 5.593 - 5.621: 99.7374% ( 1) 00:13:45.447 5.649 - 5.677: 99.7435% ( 1) 00:13:45.447 5.704 - 5.732: 99.7496% ( 1) 00:13:45.447 5.732 - 5.760: 99.7558% ( 1) 00:13:45.447 5.843 - 5.871: 99.7619% ( 1) 00:13:45.447 5.927 - 5.955: 99.7680% ( 1) 00:13:45.447 5.955 - 5.983: 99.7741% ( 1) 00:13:45.447 5.983 - 6.010: 99.7802% ( 1) 00:13:45.447 6.010 - 6.038: 99.7863% ( 1) 00:13:45.447 6.066 - 6.094: 99.7924% ( 1) 00:13:45.447 6.122 - 6.150: 99.7985% ( 1) 00:13:45.447 6.150 - 6.177: 99.8046% ( 1) 00:13:45.447 6.261 - 6.289: 99.8107% ( 1) 00:13:45.447 6.400 - 6.428: 99.8168% ( 1) 00:13:45.447 6.428 - 6.456: 99.8229% ( 1) 00:13:45.447 6.456 - 6.483: 99.8290% ( 1) 00:13:45.447 6.539 - 6.567: 99.8351% ( 1) 00:13:45.447 6.734 - 6.762: 99.8412% ( 1) 00:13:45.447 6.817 - 6.845: 99.8473% ( 1) 00:13:45.447 6.845 - 6.873: 99.8596% ( 2) 00:13:45.447 6.984 - 7.012: 99.8718% ( 2) 
00:13:45.447 7.040 - 7.068: 99.8779% ( 1) 00:13:45.447 7.068 - 7.096: 99.8840% ( 1) 00:13:45.447 7.123 - 7.179: 99.8901% ( 1) 00:13:45.447 7.290 - 7.346: 99.8962% ( 1) 00:13:45.447 7.346 - 7.402: 99.9023% ( 1) 00:13:45.447 7.457 - 7.513: 99.9084% ( 1) 00:13:45.447 7.624 - 7.680: 99.9145% ( 1) 00:13:45.447 7.903 - 7.958: 99.9206% ( 1) 00:13:45.447 9.683 - 9.739: 99.9267% ( 1) 00:13:45.447 9.795 - 9.850: 99.9328% ( 1) 00:13:45.447 3989.148 - 4017.642: 100.0000% ( 11) 00:13:45.447 00:13:45.447 Complete histogram 00:13:45.447 ================== 00:13:45.447 Range in us Cumulative Count 00:13:45.447 [2024-07-25 01:14:07.611097] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:45.447 1.795 - 1.809: 0.0061% ( 1) 00:13:45.447 1.809 - 1.823: 0.1771% ( 28) 00:13:45.447 1.823 - 1.837: 2.6806% ( 410) 00:13:45.447 1.837 - 1.850: 5.5077% ( 463) 00:13:45.447 1.850 - 1.864: 6.8328% ( 217) 00:13:45.447 1.864 - 1.878: 8.8172% ( 325) 00:13:45.447 1.878 - 1.892: 45.2403% ( 5965) 00:13:45.447 1.892 - 1.906: 86.8963% ( 6822) 00:13:45.447 1.906 - 1.920: 93.9610% ( 1157) 00:13:45.447 1.920 - 1.934: 96.1348% ( 356) 00:13:45.447 1.934 - 1.948: 96.7210% ( 96) 00:13:45.447 1.948 - 1.962: 97.4660% ( 122) 00:13:45.447 1.962 - 1.976: 98.4429% ( 160) 00:13:45.447 1.976 - 1.990: 99.0291% ( 96) 00:13:45.447 1.990 - 2.003: 99.2245% ( 32) 00:13:45.447 2.003 - 2.017: 99.3100% ( 14) 00:13:45.447 2.017 - 2.031: 99.3161% ( 1) 00:13:45.447 2.031 - 2.045: 99.3222% ( 1) 00:13:45.447 2.045 - 2.059: 99.3405% ( 3) 00:13:45.447 2.101 - 2.115: 99.3466% ( 1) 00:13:45.447 2.323 - 2.337: 99.3528% ( 1) 00:13:45.447 3.534 - 3.548: 99.3589% ( 1) 00:13:45.447 3.673 - 3.701: 99.3650% ( 1) 00:13:45.447 3.896 - 3.923: 99.3711% ( 1) 00:13:45.447 4.146 - 4.174: 99.3772% ( 1) 00:13:45.447 4.174 - 4.202: 99.3833% ( 1) 00:13:45.447 4.230 - 4.257: 99.3894% ( 1) 00:13:45.448 4.341 - 4.369: 99.3955% ( 1) 00:13:45.448 4.369 - 4.397: 99.4016% ( 1) 00:13:45.448 4.591 - 
4.619: 99.4077% ( 1) 00:13:45.448 4.730 - 4.758: 99.4138% ( 1) 00:13:45.448 4.814 - 4.842: 99.4260% ( 2) 00:13:45.448 4.842 - 4.870: 99.4321% ( 1) 00:13:45.448 4.953 - 4.981: 99.4382% ( 1) 00:13:45.448 4.981 - 5.009: 99.4443% ( 1) 00:13:45.448 5.037 - 5.064: 99.4504% ( 1) 00:13:45.448 5.092 - 5.120: 99.4566% ( 1) 00:13:45.448 5.259 - 5.287: 99.4688% ( 2) 00:13:45.448 5.287 - 5.315: 99.4749% ( 1) 00:13:45.448 5.370 - 5.398: 99.4810% ( 1) 00:13:45.448 5.482 - 5.510: 99.4871% ( 1) 00:13:45.448 5.677 - 5.704: 99.4932% ( 1) 00:13:45.448 5.816 - 5.843: 99.4993% ( 1) 00:13:45.448 5.955 - 5.983: 99.5054% ( 1) 00:13:45.448 6.010 - 6.038: 99.5115% ( 1) 00:13:45.448 6.066 - 6.094: 99.5176% ( 1) 00:13:45.448 6.984 - 7.012: 99.5237% ( 1) 00:13:45.448 3490.504 - 3504.751: 99.5298% ( 1) 00:13:45.448 3989.148 - 4017.642: 100.0000% ( 77) 00:13:45.448 00:13:45.448 01:14:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:13:45.448 01:14:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:45.448 01:14:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:13:45.448 01:14:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:13:45.448 01:14:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:45.448 [ 00:13:45.448 { 00:13:45.448 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:45.448 "subtype": "Discovery", 00:13:45.448 "listen_addresses": [], 00:13:45.448 "allow_any_host": true, 00:13:45.448 "hosts": [] 00:13:45.448 }, 00:13:45.448 { 00:13:45.448 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:45.448 "subtype": "NVMe", 00:13:45.448 "listen_addresses": [ 00:13:45.448 { 00:13:45.448 "trtype": "VFIOUSER", 00:13:45.448 "adrfam": "IPv4", 
00:13:45.448 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:45.448 "trsvcid": "0" 00:13:45.448 } 00:13:45.448 ], 00:13:45.448 "allow_any_host": true, 00:13:45.448 "hosts": [], 00:13:45.448 "serial_number": "SPDK1", 00:13:45.448 "model_number": "SPDK bdev Controller", 00:13:45.448 "max_namespaces": 32, 00:13:45.448 "min_cntlid": 1, 00:13:45.448 "max_cntlid": 65519, 00:13:45.448 "namespaces": [ 00:13:45.448 { 00:13:45.448 "nsid": 1, 00:13:45.448 "bdev_name": "Malloc1", 00:13:45.448 "name": "Malloc1", 00:13:45.448 "nguid": "9C4834E1F26E49E5A8AC1CD180CA63A3", 00:13:45.448 "uuid": "9c4834e1-f26e-49e5-a8ac-1cd180ca63a3" 00:13:45.448 }, 00:13:45.448 { 00:13:45.448 "nsid": 2, 00:13:45.448 "bdev_name": "Malloc3", 00:13:45.448 "name": "Malloc3", 00:13:45.448 "nguid": "E2CF0F8D59494DB2A9531F6FB4E5D7F9", 00:13:45.448 "uuid": "e2cf0f8d-5949-4db2-a953-1f6fb4e5d7f9" 00:13:45.448 } 00:13:45.448 ] 00:13:45.448 }, 00:13:45.448 { 00:13:45.448 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:45.448 "subtype": "NVMe", 00:13:45.448 "listen_addresses": [ 00:13:45.448 { 00:13:45.448 "trtype": "VFIOUSER", 00:13:45.448 "adrfam": "IPv4", 00:13:45.448 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:45.448 "trsvcid": "0" 00:13:45.448 } 00:13:45.448 ], 00:13:45.448 "allow_any_host": true, 00:13:45.448 "hosts": [], 00:13:45.448 "serial_number": "SPDK2", 00:13:45.448 "model_number": "SPDK bdev Controller", 00:13:45.448 "max_namespaces": 32, 00:13:45.448 "min_cntlid": 1, 00:13:45.448 "max_cntlid": 65519, 00:13:45.448 "namespaces": [ 00:13:45.448 { 00:13:45.448 "nsid": 1, 00:13:45.448 "bdev_name": "Malloc2", 00:13:45.448 "name": "Malloc2", 00:13:45.448 "nguid": "33796D9009E14F559249A0FBDBAFFD39", 00:13:45.448 "uuid": "33796d90-09e1-4f55-9249-a0fbdbaffd39" 00:13:45.448 } 00:13:45.448 ] 00:13:45.448 } 00:13:45.448 ] 00:13:45.448 01:14:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:45.448 01:14:07 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:13:45.448 01:14:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=829281 00:13:45.448 01:14:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:45.448 01:14:07 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:13:45.448 01:14:07 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:45.448 01:14:07 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:45.448 01:14:07 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:13:45.448 01:14:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:45.448 01:14:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:13:45.448 EAL: No free 2048 kB hugepages reported on node 1 00:13:45.708 [2024-07-25 01:14:07.975486] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:45.708 Malloc4 00:13:45.708 01:14:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:13:45.968 [2024-07-25 01:14:08.210254] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:45.968 01:14:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:45.968 Asynchronous Event Request test 00:13:45.968 Attaching to 
/var/run/vfio-user/domain/vfio-user2/2 00:13:45.968 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:45.968 Registering asynchronous event callbacks... 00:13:45.968 Starting namespace attribute notice tests for all controllers... 00:13:45.968 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:45.968 aer_cb - Changed Namespace 00:13:45.968 Cleaning up... 00:13:45.968 [ 00:13:45.968 { 00:13:45.968 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:45.968 "subtype": "Discovery", 00:13:45.968 "listen_addresses": [], 00:13:45.968 "allow_any_host": true, 00:13:45.968 "hosts": [] 00:13:45.968 }, 00:13:45.968 { 00:13:45.968 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:45.968 "subtype": "NVMe", 00:13:45.968 "listen_addresses": [ 00:13:45.968 { 00:13:45.968 "trtype": "VFIOUSER", 00:13:45.968 "adrfam": "IPv4", 00:13:45.968 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:45.968 "trsvcid": "0" 00:13:45.968 } 00:13:45.968 ], 00:13:45.968 "allow_any_host": true, 00:13:45.968 "hosts": [], 00:13:45.968 "serial_number": "SPDK1", 00:13:45.968 "model_number": "SPDK bdev Controller", 00:13:45.968 "max_namespaces": 32, 00:13:45.968 "min_cntlid": 1, 00:13:45.968 "max_cntlid": 65519, 00:13:45.968 "namespaces": [ 00:13:45.968 { 00:13:45.968 "nsid": 1, 00:13:45.968 "bdev_name": "Malloc1", 00:13:45.968 "name": "Malloc1", 00:13:45.968 "nguid": "9C4834E1F26E49E5A8AC1CD180CA63A3", 00:13:45.968 "uuid": "9c4834e1-f26e-49e5-a8ac-1cd180ca63a3" 00:13:45.968 }, 00:13:45.968 { 00:13:45.968 "nsid": 2, 00:13:45.968 "bdev_name": "Malloc3", 00:13:45.968 "name": "Malloc3", 00:13:45.968 "nguid": "E2CF0F8D59494DB2A9531F6FB4E5D7F9", 00:13:45.968 "uuid": "e2cf0f8d-5949-4db2-a953-1f6fb4e5d7f9" 00:13:45.968 } 00:13:45.968 ] 00:13:45.968 }, 00:13:45.968 { 00:13:45.968 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:45.968 "subtype": "NVMe", 00:13:45.968 "listen_addresses": [ 00:13:45.968 { 00:13:45.968 "trtype": "VFIOUSER", 00:13:45.968 
"adrfam": "IPv4", 00:13:45.968 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:45.968 "trsvcid": "0" 00:13:45.968 } 00:13:45.968 ], 00:13:45.968 "allow_any_host": true, 00:13:45.968 "hosts": [], 00:13:45.968 "serial_number": "SPDK2", 00:13:45.968 "model_number": "SPDK bdev Controller", 00:13:45.968 "max_namespaces": 32, 00:13:45.968 "min_cntlid": 1, 00:13:45.968 "max_cntlid": 65519, 00:13:45.968 "namespaces": [ 00:13:45.968 { 00:13:45.968 "nsid": 1, 00:13:45.968 "bdev_name": "Malloc2", 00:13:45.968 "name": "Malloc2", 00:13:45.968 "nguid": "33796D9009E14F559249A0FBDBAFFD39", 00:13:45.968 "uuid": "33796d90-09e1-4f55-9249-a0fbdbaffd39" 00:13:45.968 }, 00:13:45.968 { 00:13:45.968 "nsid": 2, 00:13:45.968 "bdev_name": "Malloc4", 00:13:45.968 "name": "Malloc4", 00:13:45.969 "nguid": "E834E9FE56E84FD2A5833EC11F9C790A", 00:13:45.969 "uuid": "e834e9fe-56e8-4fd2-a583-3ec11f9c790a" 00:13:45.969 } 00:13:45.969 ] 00:13:45.969 } 00:13:45.969 ] 00:13:45.969 01:14:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 829281 00:13:45.969 01:14:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:13:45.969 01:14:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 821643 00:13:45.969 01:14:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 821643 ']' 00:13:45.969 01:14:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 821643 00:13:45.969 01:14:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:13:45.969 01:14:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:45.969 01:14:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 821643 00:13:46.228 01:14:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:46.228 01:14:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:46.228 
01:14:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 821643' 00:13:46.228 killing process with pid 821643 00:13:46.228 01:14:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 821643 00:13:46.228 01:14:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 821643 00:13:46.489 01:14:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:46.489 01:14:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:46.489 01:14:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:13:46.489 01:14:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:13:46.489 01:14:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:13:46.489 01:14:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=829513 00:13:46.489 01:14:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 829513' 00:13:46.489 Process pid: 829513 00:13:46.489 01:14:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:13:46.489 01:14:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:46.489 01:14:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 829513 00:13:46.489 01:14:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 829513 ']' 00:13:46.489 01:14:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:46.489 01:14:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:46.489 01:14:08 nvmf_tcp.nvmf_vfio_user -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:46.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:46.489 01:14:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:46.489 01:14:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:46.489 [2024-07-25 01:14:08.783732] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:13:46.489 [2024-07-25 01:14:08.784611] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:13:46.489 [2024-07-25 01:14:08.784650] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:46.489 EAL: No free 2048 kB hugepages reported on node 1 00:13:46.489 [2024-07-25 01:14:08.838531] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:46.489 [2024-07-25 01:14:08.907132] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:46.489 [2024-07-25 01:14:08.907171] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:46.489 [2024-07-25 01:14:08.907179] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:46.489 [2024-07-25 01:14:08.907184] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:46.489 [2024-07-25 01:14:08.907189] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:46.489 [2024-07-25 01:14:08.907284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:46.489 [2024-07-25 01:14:08.907401] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:46.489 [2024-07-25 01:14:08.907468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:46.489 [2024-07-25 01:14:08.907469] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:46.748 [2024-07-25 01:14:08.985717] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:13:46.748 [2024-07-25 01:14:08.985768] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:13:46.748 [2024-07-25 01:14:08.985924] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:13:46.748 [2024-07-25 01:14:08.986218] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:13:46.748 [2024-07-25 01:14:08.986398] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:13:47.318 01:14:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:47.318 01:14:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:13:47.318 01:14:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:48.256 01:14:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:13:48.515 01:14:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:48.515 01:14:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:48.515 01:14:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:48.515 01:14:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:48.515 01:14:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:48.515 Malloc1 00:13:48.515 01:14:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:48.775 01:14:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:49.034 01:14:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:49.034 01:14:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:49.034 01:14:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p 
/var/run/vfio-user/domain/vfio-user2/2 00:13:49.034 01:14:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:49.293 Malloc2 00:13:49.293 01:14:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:49.553 01:14:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:49.812 01:14:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:49.812 01:14:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:13:49.812 01:14:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 829513 00:13:49.812 01:14:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 829513 ']' 00:13:49.812 01:14:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 829513 00:13:49.812 01:14:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:13:49.812 01:14:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:49.812 01:14:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 829513 00:13:49.812 01:14:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:49.812 01:14:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:49.812 01:14:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 829513' 00:13:49.812 killing 
process with pid 829513 00:13:49.812 01:14:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 829513 00:13:49.812 01:14:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 829513 00:13:50.072 01:14:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:50.072 01:14:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:50.072 00:13:50.072 real 0m51.237s 00:13:50.072 user 3m22.961s 00:13:50.072 sys 0m3.551s 00:13:50.072 01:14:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:50.072 01:14:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:50.072 ************************************ 00:13:50.072 END TEST nvmf_vfio_user 00:13:50.072 ************************************ 00:13:50.072 01:14:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:50.072 01:14:12 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:50.072 01:14:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:50.072 01:14:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:50.072 01:14:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:50.333 ************************************ 00:13:50.333 START TEST nvmf_vfio_user_nvme_compliance 00:13:50.333 ************************************ 00:13:50.333 01:14:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:50.333 * Looking for test storage... 
00:13:50.333 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:13:50.333 01:14:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:50.333 01:14:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:13:50.333 01:14:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:50.333 01:14:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:50.333 01:14:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:50.333 01:14:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:50.333 01:14:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:50.333 01:14:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:50.333 01:14:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:50.333 01:14:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:50.333 01:14:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:50.333 01:14:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:50.333 01:14:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:50.333 01:14:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:50.333 01:14:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:50.334 01:14:12 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:50.334 01:14:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:50.334 01:14:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:50.334 01:14:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:50.334 01:14:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:50.334 01:14:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:50.334 01:14:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:50.334 01:14:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.334 01:14:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.334 01:14:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.334 01:14:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:13:50.334 01:14:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.334 01:14:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:13:50.334 01:14:12 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:50.334 01:14:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:50.334 01:14:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:50.334 01:14:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:50.334 01:14:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:50.334 01:14:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:50.334 01:14:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:50.334 01:14:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:50.334 01:14:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:50.334 01:14:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:50.334 01:14:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:13:50.334 01:14:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:13:50.334 01:14:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:13:50.334 01:14:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=830272 00:13:50.334 01:14:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 830272' 00:13:50.334 Process pid: 830272 00:13:50.334 01:14:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:50.334 01:14:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:50.334 01:14:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 830272 00:13:50.334 01:14:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 830272 ']' 00:13:50.334 01:14:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:50.334 01:14:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:50.334 01:14:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:50.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:50.334 01:14:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:50.334 01:14:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:50.334 [2024-07-25 01:14:12.744079] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:13:50.334 [2024-07-25 01:14:12.744129] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:50.334 EAL: No free 2048 kB hugepages reported on node 1 00:13:50.334 [2024-07-25 01:14:12.796437] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:50.593 [2024-07-25 01:14:12.872307] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:50.593 [2024-07-25 01:14:12.872348] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:50.593 [2024-07-25 01:14:12.872355] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:50.593 [2024-07-25 01:14:12.872361] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:50.593 [2024-07-25 01:14:12.872366] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:50.593 [2024-07-25 01:14:12.872411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:50.593 [2024-07-25 01:14:12.872508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:50.593 [2024-07-25 01:14:12.872510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.160 01:14:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:51.160 01:14:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:13:51.160 01:14:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:13:52.097 01:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:52.097 01:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:13:52.097 01:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:52.097 01:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.097 01:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:52.097 01:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.097 01:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:13:52.097 01:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # 
rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:52.097 01:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.097 01:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:52.097 malloc0 00:13:52.097 01:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.097 01:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:13:52.097 01:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.097 01:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:52.356 01:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.356 01:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:52.356 01:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.356 01:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:52.356 01:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.356 01:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:52.356 01:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.356 01:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:52.356 01:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.356 01:14:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance 
-- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:13:52.356 EAL: No free 2048 kB hugepages reported on node 1 00:13:52.356 00:13:52.356 00:13:52.356 CUnit - A unit testing framework for C - Version 2.1-3 00:13:52.356 http://cunit.sourceforge.net/ 00:13:52.356 00:13:52.356 00:13:52.356 Suite: nvme_compliance 00:13:52.356 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-25 01:14:14.753034] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:52.356 [2024-07-25 01:14:14.754370] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:13:52.356 [2024-07-25 01:14:14.754385] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:13:52.356 [2024-07-25 01:14:14.754391] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:13:52.356 [2024-07-25 01:14:14.756057] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:52.356 passed 00:13:52.356 Test: admin_identify_ctrlr_verify_fused ...[2024-07-25 01:14:14.834626] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:52.356 [2024-07-25 01:14:14.837644] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:52.616 passed 00:13:52.616 Test: admin_identify_ns ...[2024-07-25 01:14:14.918481] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:52.616 [2024-07-25 01:14:14.981052] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:13:52.616 [2024-07-25 01:14:14.990062] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:13:52.616 [2024-07-25 01:14:15.011155] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling 
controller 00:13:52.616 passed 00:13:52.616 Test: admin_get_features_mandatory_features ...[2024-07-25 01:14:15.086408] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:52.616 [2024-07-25 01:14:15.089428] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:52.875 passed 00:13:52.875 Test: admin_get_features_optional_features ...[2024-07-25 01:14:15.169944] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:52.875 [2024-07-25 01:14:15.172963] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:52.875 passed 00:13:52.875 Test: admin_set_features_number_of_queues ...[2024-07-25 01:14:15.250920] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:52.875 [2024-07-25 01:14:15.356137] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:53.134 passed 00:13:53.134 Test: admin_get_log_page_mandatory_logs ...[2024-07-25 01:14:15.430283] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:53.134 [2024-07-25 01:14:15.433301] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:53.134 passed 00:13:53.134 Test: admin_get_log_page_with_lpo ...[2024-07-25 01:14:15.510156] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:53.134 [2024-07-25 01:14:15.580054] ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:13:53.134 [2024-07-25 01:14:15.593110] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:53.134 passed 00:13:53.394 Test: fabric_property_get ...[2024-07-25 01:14:15.668293] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:53.394 [2024-07-25 01:14:15.669531] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 
0x7f failed 00:13:53.394 [2024-07-25 01:14:15.671315] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:53.394 passed 00:13:53.394 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-25 01:14:15.749809] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:53.394 [2024-07-25 01:14:15.751040] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:13:53.394 [2024-07-25 01:14:15.752830] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:53.394 passed 00:13:53.394 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-25 01:14:15.830744] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:53.653 [2024-07-25 01:14:15.916055] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:53.653 [2024-07-25 01:14:15.932049] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:53.653 [2024-07-25 01:14:15.937147] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:53.653 passed 00:13:53.653 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-25 01:14:16.011327] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:53.653 [2024-07-25 01:14:16.012565] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:13:53.653 [2024-07-25 01:14:16.014349] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:53.653 passed 00:13:53.653 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-25 01:14:16.094507] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:53.911 [2024-07-25 01:14:16.170053] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:53.911 [2024-07-25 01:14:16.194048] 
vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:53.911 [2024-07-25 01:14:16.199144] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:53.911 passed 00:13:53.912 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-25 01:14:16.274432] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:53.912 [2024-07-25 01:14:16.275665] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:13:53.912 [2024-07-25 01:14:16.275686] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:13:53.912 [2024-07-25 01:14:16.277449] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:53.912 passed 00:13:53.912 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-25 01:14:16.356372] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:54.170 [2024-07-25 01:14:16.448061] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:13:54.170 [2024-07-25 01:14:16.456049] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:13:54.170 [2024-07-25 01:14:16.464055] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:13:54.170 [2024-07-25 01:14:16.472051] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:13:54.170 [2024-07-25 01:14:16.501124] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:54.170 passed 00:13:54.170 Test: admin_create_io_sq_verify_pc ...[2024-07-25 01:14:16.578301] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:54.171 [2024-07-25 01:14:16.597060] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:13:54.171 [2024-07-25 01:14:16.614469] vfio_user.c:2798:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:13:54.171 passed 00:13:54.430 Test: admin_create_io_qp_max_qps ...[2024-07-25 01:14:16.691981] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:55.370 [2024-07-25 01:14:17.783052] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:13:55.686 [2024-07-25 01:14:18.171018] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:55.946 passed 00:13:55.946 Test: admin_create_io_sq_shared_cq ...[2024-07-25 01:14:18.247210] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:55.946 [2024-07-25 01:14:18.379056] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:55.946 [2024-07-25 01:14:18.416112] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:56.205 passed 00:13:56.205 00:13:56.205 Run Summary: Type Total Ran Passed Failed Inactive 00:13:56.205 suites 1 1 n/a 0 0 00:13:56.205 tests 18 18 18 0 0 00:13:56.205 asserts 360 360 360 0 n/a 00:13:56.205 00:13:56.205 Elapsed time = 1.508 seconds 00:13:56.205 01:14:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 830272 00:13:56.205 01:14:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 830272 ']' 00:13:56.205 01:14:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 830272 00:13:56.205 01:14:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:13:56.205 01:14:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:56.205 01:14:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 830272 00:13:56.205 01:14:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:56.205 01:14:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:56.206 01:14:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 830272' 00:13:56.206 killing process with pid 830272 00:13:56.206 01:14:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 830272 00:13:56.206 01:14:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 830272 00:13:56.466 01:14:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:13:56.466 01:14:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:13:56.466 00:13:56.466 real 0m6.130s 00:13:56.466 user 0m17.533s 00:13:56.466 sys 0m0.432s 00:13:56.466 01:14:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:56.466 01:14:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:56.466 ************************************ 00:13:56.466 END TEST nvmf_vfio_user_nvme_compliance 00:13:56.466 ************************************ 00:13:56.466 01:14:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:56.466 01:14:18 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:56.466 01:14:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:56.466 01:14:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:56.466 01:14:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:56.466 ************************************ 00:13:56.466 START TEST nvmf_vfio_user_fuzz 00:13:56.466 ************************************ 00:13:56.466 01:14:18 
nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:56.466 * Looking for test storage... 00:13:56.466 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:56.466 01:14:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:56.466 01:14:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:13:56.466 01:14:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:56.466 01:14:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:56.466 01:14:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:56.466 01:14:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:56.466 01:14:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:56.466 01:14:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:56.466 01:14:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:56.466 01:14:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:56.466 01:14:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:56.466 01:14:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:56.466 01:14:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:56.466 01:14:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:56.466 01:14:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:56.466 
01:14:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:56.466 01:14:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:56.466 01:14:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:56.466 01:14:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:56.466 01:14:18 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:56.466 01:14:18 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:56.466 01:14:18 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:56.466 01:14:18 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.466 01:14:18 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:13:56.466 01:14:18 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.466 01:14:18 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:13:56.466 01:14:18 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.466 01:14:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:13:56.466 01:14:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:56.466 01:14:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:56.466 01:14:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:56.466 01:14:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:56.466 01:14:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:56.466 01:14:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- 
# '[' -n '' ']' 00:13:56.466 01:14:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:56.466 01:14:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:56.466 01:14:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:56.466 01:14:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:56.466 01:14:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:56.466 01:14:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:13:56.466 01:14:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:56.466 01:14:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:56.466 01:14:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:13:56.466 01:14:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=831261 00:13:56.466 01:14:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 831261' 00:13:56.466 Process pid: 831261 00:13:56.466 01:14:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:56.466 01:14:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:56.466 01:14:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 831261 00:13:56.466 01:14:18 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 831261 ']' 00:13:56.466 01:14:18 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:56.466 01:14:18 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:13:56.466 01:14:18 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:56.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:56.467 01:14:18 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:56.467 01:14:18 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:57.406 01:14:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:57.406 01:14:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:13:57.406 01:14:19 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:13:58.351 01:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:58.351 01:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.351 01:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:58.351 01:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.351 01:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:13:58.351 01:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:58.351 01:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.351 01:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:58.351 malloc0 00:13:58.351 01:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.351 01:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:13:58.351 01:14:20 nvmf_tcp.nvmf_vfio_user_fuzz 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.351 01:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:58.351 01:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.351 01:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:58.351 01:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.351 01:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:58.351 01:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.351 01:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:58.351 01:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.351 01:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:58.351 01:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.351 01:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:13:58.351 01:14:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:14:30.438 Fuzzing completed. 
Shutting down the fuzz application 00:14:30.438 00:14:30.438 Dumping successful admin opcodes: 00:14:30.438 8, 9, 10, 24, 00:14:30.438 Dumping successful io opcodes: 00:14:30.438 0, 00:14:30.438 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1065104, total successful commands: 4199, random_seed: 1017079360 00:14:30.438 NS: 0x200003a1ef00 admin qp, Total commands completed: 264294, total successful commands: 2127, random_seed: 3373664384 00:14:30.438 01:14:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:14:30.438 01:14:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.438 01:14:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:30.438 01:14:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.438 01:14:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 831261 00:14:30.438 01:14:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 831261 ']' 00:14:30.438 01:14:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 831261 00:14:30.438 01:14:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:14:30.438 01:14:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:30.438 01:14:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 831261 00:14:30.438 01:14:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:30.438 01:14:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:30.438 01:14:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 831261' 00:14:30.438 killing process with pid 831261 00:14:30.438 01:14:51 nvmf_tcp.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@967 -- # kill 831261 00:14:30.438 01:14:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 831261 00:14:30.438 01:14:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:14:30.438 01:14:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:14:30.438 00:14:30.438 real 0m32.765s 00:14:30.438 user 0m31.562s 00:14:30.438 sys 0m30.724s 00:14:30.438 01:14:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:30.438 01:14:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:30.438 ************************************ 00:14:30.438 END TEST nvmf_vfio_user_fuzz 00:14:30.438 ************************************ 00:14:30.438 01:14:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:30.438 01:14:51 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:30.438 01:14:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:30.438 01:14:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:30.438 01:14:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:30.438 ************************************ 00:14:30.438 START TEST nvmf_host_management 00:14:30.439 ************************************ 00:14:30.439 01:14:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:30.439 * Looking for test storage... 
00:14:30.439 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:30.439 01:14:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:30.439 01:14:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:14:30.439 01:14:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:30.439 01:14:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:30.439 01:14:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:30.439 01:14:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:30.439 01:14:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:30.439 01:14:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:30.439 01:14:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:30.439 01:14:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:30.439 01:14:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:30.439 01:14:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:30.439 01:14:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:30.439 01:14:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:30.439 01:14:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:30.439 01:14:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:30.439 01:14:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:30.439 
01:14:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:30.439 01:14:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:30.439 01:14:51 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:30.439 01:14:51 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:30.439 01:14:51 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:30.439 01:14:51 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.439 01:14:51 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.439 01:14:51 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.439 01:14:51 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:14:30.439 01:14:51 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.439 01:14:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:14:30.439 01:14:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:30.439 01:14:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:30.439 01:14:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:30.439 01:14:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:30.439 01:14:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:30.439 01:14:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:30.439 01:14:51 nvmf_tcp.nvmf_host_management 
-- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:30.439 01:14:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:30.439 01:14:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:30.439 01:14:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:30.439 01:14:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:14:30.439 01:14:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:30.439 01:14:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:30.439 01:14:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:30.439 01:14:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:30.439 01:14:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:30.439 01:14:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:30.439 01:14:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:30.439 01:14:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:30.439 01:14:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:30.439 01:14:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:30.439 01:14:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:14:30.439 01:14:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:34.634 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:34.634 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:14:34.634 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # 
local -a pci_devs 00:14:34.634 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:34.634 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:34.634 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:34.634 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:34.634 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:14:34.634 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:34.634 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:14:34.634 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:14:34.634 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:14:34.634 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:14:34.634 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:14:34.634 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:14:34.634 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:34.634 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:34.634 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:34.634 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:34.634 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:34.634 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:34.634 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:14:34.634 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:34.634 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:34.634 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:34.634 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:34.634 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:34.634 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:34.634 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:34.634 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:34.634 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:34.634 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:34.634 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:34.634 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:34.634 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:34.634 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:34.634 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:34.634 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:34.634 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:34.634 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:34.634 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:34.634 
01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:34.634 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:34.634 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:34.634 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:34.634 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:34.634 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:34.635 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:34.635 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:34.635 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:34.635 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:34.635 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:34.635 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:34.635 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:34.635 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:34.635 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:34.635 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:34.635 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:34.635 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:34.635 Found net devices under 0000:86:00.0: cvl_0_0 00:14:34.635 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:14:34.635 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:34.635 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:34.635 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:34.635 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:34.635 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:34.635 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:34.635 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:34.635 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:34.635 Found net devices under 0000:86:00.1: cvl_0_1 00:14:34.635 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:34.635 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:34.635 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:14:34.635 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:34.635 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:34.635 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:34.635 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:34.635 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:34.635 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:34.635 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:34.635 01:14:56 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:34.635 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:34.635 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:34.635 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:34.635 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:34.635 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:34.635 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:34.635 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:34.635 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:34.635 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:34.635 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:34.635 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:34.635 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:34.635 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:34.635 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:34.635 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:34.635 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:34.635 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.333 ms 00:14:34.635 00:14:34.635 --- 10.0.0.2 ping statistics --- 00:14:34.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:34.635 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms 00:14:34.635 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:34.635 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:34.635 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.357 ms 00:14:34.635 00:14:34.635 --- 10.0.0.1 ping statistics --- 00:14:34.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:34.635 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms 00:14:34.635 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:34.635 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:14:34.635 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:34.635 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:34.635 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:34.635 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:34.635 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:34.635 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:34.635 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:34.635 01:14:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:14:34.635 01:14:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:14:34.635 01:14:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:14:34.635 01:14:56 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:34.635 01:14:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:34.635 01:14:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:34.635 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=839666 00:14:34.635 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 839666 00:14:34.635 01:14:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:14:34.635 01:14:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 839666 ']' 00:14:34.635 01:14:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:34.635 01:14:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:34.635 01:14:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:34.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:34.635 01:14:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:34.635 01:14:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:34.635 [2024-07-25 01:14:56.949982] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:14:34.635 [2024-07-25 01:14:56.950025] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:34.635 EAL: No free 2048 kB hugepages reported on node 1 00:14:34.635 [2024-07-25 01:14:57.006305] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:34.635 [2024-07-25 01:14:57.087550] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:34.635 [2024-07-25 01:14:57.087586] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:34.635 [2024-07-25 01:14:57.087593] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:34.635 [2024-07-25 01:14:57.087599] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:34.635 [2024-07-25 01:14:57.087604] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
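The trace above starts `nvmf_tgt` inside the test namespace and then calls `waitforlisten 839666`, which blocks until the target's RPC socket appears. A simplified sketch of that helper (the real `autotest_common.sh` version also verifies the pid is still alive and issues an RPC to confirm readiness; the optional third retry-count argument here is an addition for illustration, not part of the real signature):

```shell
# Simplified waitforlisten sketch: poll until the app's RPC UNIX socket
# exists, giving up after max_retries attempts. Defaults mirror the
# rpc_addr=/var/tmp/spdk.sock and max_retries=100 values in the trace.
waitforlisten() {
  pid=$1
  rpc_addr=${2:-/var/tmp/spdk.sock}
  max_retries=${3:-100}   # hypothetical knob; the real helper hardcodes 100
  i=0
  while [ "$i" -lt "$max_retries" ]; do
    # the target creates a UNIX domain socket at $rpc_addr once it is up
    if [ -S "$rpc_addr" ]; then
      return 0
    fi
    sleep 0.1
    i=$((i + 1))
  done
  echo "timed out waiting for $rpc_addr (pid $pid)" >&2
  return 1
}
```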
00:14:34.635 [2024-07-25 01:14:57.087700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:34.635 [2024-07-25 01:14:57.087785] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:34.635 [2024-07-25 01:14:57.087892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:34.635 [2024-07-25 01:14:57.087894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:14:35.574 01:14:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:35.574 01:14:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:14:35.574 01:14:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:35.574 01:14:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:35.574 01:14:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:35.574 01:14:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:35.574 01:14:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:35.574 01:14:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:35.574 01:14:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:35.574 [2024-07-25 01:14:57.806909] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:35.574 01:14:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:35.574 01:14:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:14:35.574 01:14:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:35.574 01:14:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:35.574 01:14:57 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:14:35.574 01:14:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:14:35.574 01:14:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:14:35.575 01:14:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:35.575 01:14:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:35.575 Malloc0 00:14:35.575 [2024-07-25 01:14:57.866315] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:35.575 01:14:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:35.575 01:14:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:14:35.575 01:14:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:35.575 01:14:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:35.575 01:14:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=839822 00:14:35.575 01:14:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 839822 /var/tmp/bdevperf.sock 00:14:35.575 01:14:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:14:35.575 01:14:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 839822 ']' 00:14:35.575 01:14:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:14:35.575 01:14:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:14:35.575 01:14:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:35.575 01:14:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in 
"${@:-1}" 00:14:35.575 01:14:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:14:35.575 01:14:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:35.575 { 00:14:35.575 "params": { 00:14:35.575 "name": "Nvme$subsystem", 00:14:35.575 "trtype": "$TEST_TRANSPORT", 00:14:35.575 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:35.575 "adrfam": "ipv4", 00:14:35.575 "trsvcid": "$NVMF_PORT", 00:14:35.575 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:35.575 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:35.575 "hdgst": ${hdgst:-false}, 00:14:35.575 "ddgst": ${ddgst:-false} 00:14:35.575 }, 00:14:35.575 "method": "bdev_nvme_attach_controller" 00:14:35.575 } 00:14:35.575 EOF 00:14:35.575 )") 00:14:35.575 01:14:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:35.575 01:14:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:35.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:35.575 01:14:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:35.575 01:14:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:35.575 01:14:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:14:35.575 01:14:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
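The heredoc template in the trace above is what `gen_nvmf_target_json` expands once per subsystem before piping the result through `jq`. A standalone sketch for subsystem 0, with the variables pinned to the values visible in this run's output (the real helper loops over all requested subsystem ids and accumulates the fragments into a `config` array):

```shell
# Sketch: fill the bdevperf attach-controller template for subsystem 0.
# TEST_TRANSPORT / NVMF_FIRST_TARGET_IP / NVMF_PORT are set here to the
# values this run printed; normally the test environment exports them.
subsystem=0
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420
config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
printf '%s\n' "$config"
```

The expanded text matches the `printf '%s\n'` output that follows in the trace, which bdevperf consumes via `--json /dev/fd/63`.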
00:14:35.575 01:14:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:14:35.575 01:14:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:35.575 "params": { 00:14:35.575 "name": "Nvme0", 00:14:35.575 "trtype": "tcp", 00:14:35.575 "traddr": "10.0.0.2", 00:14:35.575 "adrfam": "ipv4", 00:14:35.575 "trsvcid": "4420", 00:14:35.575 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:35.575 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:35.575 "hdgst": false, 00:14:35.575 "ddgst": false 00:14:35.575 }, 00:14:35.575 "method": "bdev_nvme_attach_controller" 00:14:35.575 }' 00:14:35.575 [2024-07-25 01:14:57.957578] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:14:35.575 [2024-07-25 01:14:57.957623] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid839822 ] 00:14:35.575 EAL: No free 2048 kB hugepages reported on node 1 00:14:35.575 [2024-07-25 01:14:58.013086] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:35.835 [2024-07-25 01:14:58.087345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:35.835 Running I/O for 10 seconds... 
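Once bdevperf reports "Running I/O for 10 seconds...", the trace enters the `waitforio` helper from `host_management.sh`, which polls `bdev_get_iostat` over the bdevperf RPC socket until the bdev has served enough reads. A simplified sketch of that loop (retry count and the `-ge 100` threshold are taken from the trace; `read_io_count` is stubbed out here, and the 0.25s sleep is an assumed pacing value):

```shell
# Sketch of the waitforio polling loop: retry up to 10 times until Nvme0n1
# has served at least 100 reads. read_io_count is a stub; the real query is:
#   rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
#     | jq -r '.bdevs[0].num_read_ops'
read_io_count() { echo 515; }   # stubbed: 515 is the count seen in this run

waitforio() {
  i=10
  ret=1
  while [ "$i" -ne 0 ]; do
    count=$(read_io_count)
    if [ "$count" -ge 100 ]; then
      ret=0
      break
    fi
    sleep 0.25   # assumed pacing between iostat polls
    i=$((i - 1))
  done
  return $ret
}

waitforio && echo "I/O threshold reached"
# → I/O threshold reached
```

In this run the very first poll returned 515 reads, so the loop breaks immediately with `ret=0` and the test proceeds to the `nvmf_subsystem_remove_host` step below.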
00:14:36.406 01:14:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:36.406 01:14:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:14:36.406 01:14:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:14:36.406 01:14:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.406 01:14:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:36.406 01:14:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.406 01:14:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:36.406 01:14:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:14:36.406 01:14:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:14:36.406 01:14:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:14:36.406 01:14:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:14:36.406 01:14:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:14:36.406 01:14:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:14:36.406 01:14:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:14:36.406 01:14:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:14:36.406 01:14:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:14:36.406 01:14:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.406 
01:14:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:36.406 01:14:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.406 01:14:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:14:36.406 01:14:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:14:36.406 01:14:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:14:36.406 01:14:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:14:36.406 01:14:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:14:36.406 01:14:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:36.407 01:14:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.407 01:14:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:36.407 [2024-07-25 01:14:58.841527] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1342570 is same with the state(5) to be set 00:14:36.407 [2024-07-25 01:14:58.841570] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1342570 is same with the state(5) to be set 00:14:36.407 [2024-07-25 01:14:58.841577] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1342570 is same with the state(5) to be set 00:14:36.407 [2024-07-25 01:14:58.841584] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1342570 is same with the state(5) to be set 00:14:36.407 [2024-07-25 01:14:58.841590] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1342570 is same with the state(5) to be set 00:14:36.407 [2024-07-25 01:14:58.841596] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1342570 is same with the state(5) to be set 00:14:36.407 [2024-07-25 01:14:58.841603] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1342570 is same with the state(5) to be set 00:14:36.407 01:14:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.407 01:14:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:36.407 01:14:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.407 01:14:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:36.407 [2024-07-25 01:14:58.853866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:36.407 [2024-07-25 01:14:58.853897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.407 [2024-07-25 01:14:58.853907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:36.407 [2024-07-25 01:14:58.853914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.407 [2024-07-25 01:14:58.853921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:36.407 [2024-07-25 01:14:58.853928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.407 [2024-07-25 01:14:58.853935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:36.407 [2024-07-25 01:14:58.853941] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.407 [2024-07-25 01:14:58.853948] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d47980 is same with the state(5) to be set 00:14:36.407 [2024-07-25 01:14:58.854546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.407 [2024-07-25 01:14:58.854566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.407 [2024-07-25 01:14:58.854581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.407 [2024-07-25 01:14:58.854588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.407 [2024-07-25 01:14:58.854597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.407 [2024-07-25 01:14:58.854605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.407 [2024-07-25 01:14:58.854613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.407 [2024-07-25 01:14:58.854623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.407 [2024-07-25 01:14:58.854632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.407 [2024-07-25 01:14:58.854639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:14:36.407 [2024-07-25 01:14:58.854648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.407 [2024-07-25 01:14:58.854655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.407 [2024-07-25 01:14:58.854664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.407 [2024-07-25 01:14:58.854671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.407 [2024-07-25 01:14:58.854680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.407 [2024-07-25 01:14:58.854687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.407 [2024-07-25 01:14:58.854696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.407 [2024-07-25 01:14:58.854703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.407 [2024-07-25 01:14:58.854713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.407 [2024-07-25 01:14:58.854720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.407 [2024-07-25 01:14:58.854729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.407 [2024-07-25 
01:14:58.854736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.407 [2024-07-25 01:14:58.854745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.407 [2024-07-25 01:14:58.854752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.407 [2024-07-25 01:14:58.854761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.407 [2024-07-25 01:14:58.854769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.407 [2024-07-25 01:14:58.854778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.407 [2024-07-25 01:14:58.854786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.407 [2024-07-25 01:14:58.854794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.407 [2024-07-25 01:14:58.854802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.407 [2024-07-25 01:14:58.854811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.407 [2024-07-25 01:14:58.854819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.407 [2024-07-25 01:14:58.854830] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.407 [2024-07-25 01:14:58.854838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.407 [2024-07-25 01:14:58.854848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.407 [2024-07-25 01:14:58.854857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.407 [2024-07-25 01:14:58.854865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.407 [2024-07-25 01:14:58.854874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.407 [2024-07-25 01:14:58.854884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.407 [2024-07-25 01:14:58.854893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.407 [2024-07-25 01:14:58.854902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.407 [2024-07-25 01:14:58.854910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.407 [2024-07-25 01:14:58.854920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:76416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.407 [2024-07-25 01:14:58.854928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.407 [2024-07-25 01:14:58.854937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:76544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.407 [2024-07-25 01:14:58.854946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.407 [2024-07-25 01:14:58.854957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.407 [2024-07-25 01:14:58.854964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.407 [2024-07-25 01:14:58.854974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:76800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.407 [2024-07-25 01:14:58.854982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.407 [2024-07-25 01:14:58.854992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.407 [2024-07-25 01:14:58.855000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.407 [2024-07-25 01:14:58.855009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.407 [2024-07-25 01:14:58.855016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.407 [2024-07-25 01:14:58.855026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:77184 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.407 [2024-07-25 01:14:58.855034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.407 [2024-07-25 01:14:58.855050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:77312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.407 [2024-07-25 01:14:58.855059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.408 [2024-07-25 01:14:58.855068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.408 [2024-07-25 01:14:58.855076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.408 [2024-07-25 01:14:58.855085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.408 [2024-07-25 01:14:58.855093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.408 [2024-07-25 01:14:58.855103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.408 [2024-07-25 01:14:58.855110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.408 [2024-07-25 01:14:58.855119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.408 [2024-07-25 01:14:58.855127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.408 
[2024-07-25 01:14:58.855136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.408 [2024-07-25 01:14:58.855143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.408 [2024-07-25 01:14:58.855153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.408 [2024-07-25 01:14:58.855161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.408 [2024-07-25 01:14:58.855170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.408 [2024-07-25 01:14:58.855178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.408 [2024-07-25 01:14:58.855187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.408 [2024-07-25 01:14:58.855194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.408 [2024-07-25 01:14:58.855203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.408 [2024-07-25 01:14:58.855211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.408 [2024-07-25 01:14:58.855220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.408 [2024-07-25 01:14:58.855227] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.408 [2024-07-25 01:14:58.855237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.408 [2024-07-25 01:14:58.855244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.408 [2024-07-25 01:14:58.855253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.408 [2024-07-25 01:14:58.855261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.408 [2024-07-25 01:14:58.855275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.408 [2024-07-25 01:14:58.855284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.408 [2024-07-25 01:14:58.855294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.408 [2024-07-25 01:14:58.855302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.408 [2024-07-25 01:14:58.855311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.408 [2024-07-25 01:14:58.855319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.408 [2024-07-25 01:14:58.855329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.408 [2024-07-25 01:14:58.855337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.408 [2024-07-25 01:14:58.855346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.408 [2024-07-25 01:14:58.855354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.408 [2024-07-25 01:14:58.855364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.408 [2024-07-25 01:14:58.855372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.408 [2024-07-25 01:14:58.855381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.408 [2024-07-25 01:14:58.855388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.408 [2024-07-25 01:14:58.855397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.408 [2024-07-25 01:14:58.855407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.408 [2024-07-25 01:14:58.855416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.408 [2024-07-25 01:14:58.855423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:14:36.408 [2024-07-25 01:14:58.855432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.408 [2024-07-25 01:14:58.855440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.408 [2024-07-25 01:14:58.855448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.408 [2024-07-25 01:14:58.855456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.408 [2024-07-25 01:14:58.855464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.408 [2024-07-25 01:14:58.855472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.408 [2024-07-25 01:14:58.855481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.408 [2024-07-25 01:14:58.855489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.408 [2024-07-25 01:14:58.855498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.408 [2024-07-25 01:14:58.855506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.408 [2024-07-25 01:14:58.855515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.408 [2024-07-25 
01:14:58.855522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.408 [2024-07-25 01:14:58.855530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.408 [2024-07-25 01:14:58.855538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.408 [2024-07-25 01:14:58.855547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.408 [2024-07-25 01:14:58.855554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.408 [2024-07-25 01:14:58.855563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.408 [2024-07-25 01:14:58.855570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.408 [2024-07-25 01:14:58.855579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.408 [2024-07-25 01:14:58.855587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.408 [2024-07-25 01:14:58.855595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.408 [2024-07-25 01:14:58.855603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.408 [2024-07-25 01:14:58.855612] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.408 [2024-07-25 01:14:58.855619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.408 [2024-07-25 01:14:58.855629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.408 [2024-07-25 01:14:58.855636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.408 [2024-07-25 01:14:58.855645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:36.408 [2024-07-25 01:14:58.855653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.408 [2024-07-25 01:14:58.855661] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2179990 is same with the state(5) to be set 00:14:36.408 [2024-07-25 01:14:58.855713] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2179990 was disconnected and freed. reset controller. 
00:14:36.408 [2024-07-25 01:14:58.856610] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:14:36.408 task offset: 73728 on job bdev=Nvme0n1 fails 00:14:36.408 00:14:36.408 Latency(us) 00:14:36.408 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:36.408 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:36.408 Job: Nvme0n1 ended in about 0.57 seconds with error 00:14:36.408 Verification LBA range: start 0x0 length 0x400 00:14:36.408 Nvme0n1 : 0.57 1008.69 63.04 112.08 0.00 56085.64 1617.03 58811.44 00:14:36.408 =================================================================================================================== 00:14:36.408 Total : 1008.69 63.04 112.08 0.00 56085.64 1617.03 58811.44 00:14:36.409 01:14:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.409 [2024-07-25 01:14:58.858219] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:36.409 [2024-07-25 01:14:58.858236] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d47980 (9): Bad file descriptor 00:14:36.409 01:14:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:14:36.668 [2024-07-25 01:14:58.915351] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:14:37.636 01:14:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 839822 00:14:37.636 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (839822) - No such process 00:14:37.636 01:14:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:14:37.636 01:14:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:14:37.636 01:14:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:14:37.636 01:14:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:14:37.636 01:14:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:14:37.636 01:14:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:14:37.636 01:14:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:37.636 01:14:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:37.636 { 00:14:37.636 "params": { 00:14:37.636 "name": "Nvme$subsystem", 00:14:37.636 "trtype": "$TEST_TRANSPORT", 00:14:37.636 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:37.636 "adrfam": "ipv4", 00:14:37.636 "trsvcid": "$NVMF_PORT", 00:14:37.636 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:37.636 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:37.636 "hdgst": ${hdgst:-false}, 00:14:37.636 "ddgst": ${ddgst:-false} 00:14:37.636 }, 00:14:37.636 "method": "bdev_nvme_attach_controller" 00:14:37.636 } 00:14:37.636 EOF 00:14:37.636 )") 00:14:37.636 01:14:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:14:37.636 01:14:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
00:14:37.636 01:14:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:14:37.636 01:14:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:37.636 "params": { 00:14:37.636 "name": "Nvme0", 00:14:37.636 "trtype": "tcp", 00:14:37.636 "traddr": "10.0.0.2", 00:14:37.636 "adrfam": "ipv4", 00:14:37.636 "trsvcid": "4420", 00:14:37.636 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:37.636 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:37.636 "hdgst": false, 00:14:37.636 "ddgst": false 00:14:37.636 }, 00:14:37.636 "method": "bdev_nvme_attach_controller" 00:14:37.636 }' 00:14:37.636 [2024-07-25 01:14:59.911423] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:14:37.636 [2024-07-25 01:14:59.911473] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid840286 ] 00:14:37.636 EAL: No free 2048 kB hugepages reported on node 1 00:14:37.636 [2024-07-25 01:14:59.966006] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:37.636 [2024-07-25 01:15:00.049451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:37.895 Running I/O for 1 seconds... 
00:14:39.276 00:14:39.276 Latency(us) 00:14:39.276 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:39.276 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:39.276 Verification LBA range: start 0x0 length 0x400 00:14:39.276 Nvme0n1 : 1.03 1058.83 66.18 0.00 0.00 59696.66 13620.09 59723.24 00:14:39.276 =================================================================================================================== 00:14:39.276 Total : 1058.83 66.18 0.00 0.00 59696.66 13620.09 59723.24 00:14:39.276 01:15:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:14:39.276 01:15:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:14:39.276 01:15:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:14:39.276 01:15:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:14:39.276 01:15:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:14:39.276 01:15:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:39.276 01:15:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:14:39.276 01:15:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:39.276 01:15:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:14:39.276 01:15:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:39.276 01:15:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:39.276 rmmod nvme_tcp 00:14:39.276 rmmod nvme_fabrics 00:14:39.276 rmmod nvme_keyring 00:14:39.276 01:15:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:39.276 
01:15:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:14:39.276 01:15:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:14:39.276 01:15:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 839666 ']' 00:14:39.276 01:15:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 839666 00:14:39.276 01:15:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 839666 ']' 00:14:39.276 01:15:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 839666 00:14:39.276 01:15:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:14:39.276 01:15:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:39.276 01:15:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 839666 00:14:39.276 01:15:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:39.276 01:15:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:39.276 01:15:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 839666' 00:14:39.276 killing process with pid 839666 00:14:39.276 01:15:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 839666 00:14:39.276 01:15:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 839666 00:14:39.536 [2024-07-25 01:15:01.842776] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:14:39.536 01:15:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:39.536 01:15:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:39.536 01:15:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:39.536 01:15:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # 
[[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:39.536 01:15:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:39.536 01:15:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:39.536 01:15:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:39.536 01:15:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:41.446 01:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:41.446 01:15:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:14:41.446 00:14:41.446 real 0m12.340s 00:14:41.446 user 0m23.182s 00:14:41.446 sys 0m4.950s 00:14:41.446 01:15:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:41.446 01:15:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:41.446 ************************************ 00:14:41.446 END TEST nvmf_host_management 00:14:41.446 ************************************ 00:14:41.707 01:15:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:41.707 01:15:03 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:41.707 01:15:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:41.707 01:15:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:41.707 01:15:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:41.707 ************************************ 00:14:41.707 START TEST nvmf_lvol 00:14:41.707 ************************************ 00:14:41.707 01:15:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:41.707 * Looking for test storage... 
00:14:41.707 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:41.707 01:15:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:41.707 01:15:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:14:41.707 01:15:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:41.707 01:15:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:41.707 01:15:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:41.707 01:15:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:41.707 01:15:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:41.707 01:15:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:41.707 01:15:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:41.707 01:15:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:41.707 01:15:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:41.707 01:15:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:41.707 01:15:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:41.707 01:15:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:41.707 01:15:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:41.707 01:15:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:41.707 01:15:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:41.707 01:15:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:41.707 01:15:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:41.707 01:15:04 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:41.707 01:15:04 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:41.707 01:15:04 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:41.707 01:15:04 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.707 01:15:04 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.707 01:15:04 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.707 01:15:04 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:14:41.707 01:15:04 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.707 01:15:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:14:41.707 01:15:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:41.707 01:15:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:41.707 01:15:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:41.707 01:15:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:41.707 01:15:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:41.707 01:15:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:41.707 01:15:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:41.707 01:15:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:14:41.707 01:15:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:41.707 01:15:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:41.707 01:15:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:14:41.707 01:15:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:14:41.707 01:15:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:41.707 01:15:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:14:41.707 01:15:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:41.707 01:15:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:41.707 01:15:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:41.707 01:15:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:41.707 01:15:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:41.707 01:15:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:41.707 01:15:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:41.707 01:15:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:41.707 01:15:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:41.707 01:15:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:41.707 01:15:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:14:41.707 01:15:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:47.050 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:47.050 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:14:47.050 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # 
local -a pci_devs 00:14:47.050 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:47.050 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:47.050 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:47.050 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:47.050 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:14:47.050 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:47.050 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:14:47.050 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:14:47.050 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:14:47.050 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:14:47.050 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:14:47.050 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:47.051 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:47.051 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:47.051 Found net devices under 0000:86:00.0: cvl_0_0 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:47.051 Found net devices under 0000:86:00.1: cvl_0_1 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 
00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:47.051 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:47.051 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:14:47.051 00:14:47.051 --- 10.0.0.2 ping statistics --- 00:14:47.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:47.051 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:47.051 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:47.051 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:14:47.051 00:14:47.051 --- 10.0.0.1 ping statistics --- 00:14:47.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:47.051 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:47.051 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:47.311 01:15:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:14:47.311 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:47.311 01:15:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:47.311 01:15:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:47.311 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=844426 00:14:47.311 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 844426 00:14:47.311 01:15:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:47.311 01:15:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 844426 ']' 00:14:47.311 01:15:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- 
# local rpc_addr=/var/tmp/spdk.sock 00:14:47.311 01:15:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:47.311 01:15:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:47.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:47.311 01:15:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:47.311 01:15:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:47.311 [2024-07-25 01:15:09.608204] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:14:47.311 [2024-07-25 01:15:09.608250] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:47.311 EAL: No free 2048 kB hugepages reported on node 1 00:14:47.311 [2024-07-25 01:15:09.667474] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:47.311 [2024-07-25 01:15:09.742578] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:47.311 [2024-07-25 01:15:09.742618] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:47.311 [2024-07-25 01:15:09.742625] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:47.311 [2024-07-25 01:15:09.742631] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:47.311 [2024-07-25 01:15:09.742636] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:47.311 [2024-07-25 01:15:09.742680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:47.311 [2024-07-25 01:15:09.742778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:47.311 [2024-07-25 01:15:09.742780] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:48.250 01:15:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:48.250 01:15:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:14:48.250 01:15:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:48.250 01:15:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:48.250 01:15:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:48.250 01:15:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:48.250 01:15:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:48.250 [2024-07-25 01:15:10.596579] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:48.250 01:15:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:48.510 01:15:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:14:48.510 01:15:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:48.769 01:15:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:14:48.769 01:15:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:14:48.769 01:15:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:14:49.028 01:15:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=61bad1ba-901a-464b-ad03-bfb8b5a0fb1d 00:14:49.028 01:15:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 61bad1ba-901a-464b-ad03-bfb8b5a0fb1d lvol 20 00:14:49.288 01:15:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=45821ac3-fd29-4464-88b8-0b671771c3e6 00:14:49.288 01:15:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:49.288 01:15:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 45821ac3-fd29-4464-88b8-0b671771c3e6 00:14:49.595 01:15:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:49.595 [2024-07-25 01:15:12.078027] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:49.855 01:15:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:49.855 01:15:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:14:49.855 01:15:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=844971 00:14:49.855 01:15:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:14:49.855 EAL: No free 2048 kB hugepages reported on node 1 
00:14:50.795 01:15:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 45821ac3-fd29-4464-88b8-0b671771c3e6 MY_SNAPSHOT 00:14:51.054 01:15:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=ae280d65-0c45-43f5-a381-fc3f689db450 00:14:51.054 01:15:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 45821ac3-fd29-4464-88b8-0b671771c3e6 30 00:14:51.313 01:15:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone ae280d65-0c45-43f5-a381-fc3f689db450 MY_CLONE 00:14:51.573 01:15:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=ab8628f9-04b7-4fe1-860d-83c6dec66d1b 00:14:51.573 01:15:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate ab8628f9-04b7-4fe1-860d-83c6dec66d1b 00:14:51.833 01:15:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 844971 00:15:01.822 Initializing NVMe Controllers 00:15:01.822 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:15:01.822 Controller IO queue size 128, less than required. 00:15:01.822 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:01.822 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:15:01.822 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:15:01.822 Initialization complete. Launching workers. 
00:15:01.822 ======================================================== 00:15:01.822 Latency(us) 00:15:01.822 Device Information : IOPS MiB/s Average min max 00:15:01.822 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11985.10 46.82 10686.18 932.95 65638.07 00:15:01.822 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11853.50 46.30 10803.27 3031.53 61890.09 00:15:01.822 ======================================================== 00:15:01.822 Total : 23838.60 93.12 10744.40 932.95 65638.07 00:15:01.822 00:15:01.822 01:15:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:01.822 01:15:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 45821ac3-fd29-4464-88b8-0b671771c3e6 00:15:01.822 01:15:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 61bad1ba-901a-464b-ad03-bfb8b5a0fb1d 00:15:01.822 01:15:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:15:01.822 01:15:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:15:01.822 01:15:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:15:01.822 01:15:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:01.822 01:15:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:15:01.822 01:15:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:01.822 01:15:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:15:01.822 01:15:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:01.822 01:15:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:01.822 rmmod nvme_tcp 00:15:01.822 rmmod nvme_fabrics 00:15:01.822 rmmod nvme_keyring 00:15:01.822 
01:15:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:01.822 01:15:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:15:01.822 01:15:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:15:01.822 01:15:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 844426 ']' 00:15:01.822 01:15:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 844426 00:15:01.822 01:15:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 844426 ']' 00:15:01.822 01:15:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 844426 00:15:01.822 01:15:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:15:01.822 01:15:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:01.822 01:15:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 844426 00:15:01.822 01:15:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:01.822 01:15:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:01.822 01:15:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 844426' 00:15:01.822 killing process with pid 844426 00:15:01.822 01:15:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 844426 00:15:01.822 01:15:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 844426 00:15:01.822 01:15:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:01.822 01:15:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:01.822 01:15:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:01.822 01:15:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:01.822 01:15:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:01.822 01:15:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:15:01.822 01:15:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:01.822 01:15:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:03.203 01:15:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:03.203 00:15:03.203 real 0m21.565s 00:15:03.203 user 1m3.677s 00:15:03.203 sys 0m6.753s 00:15:03.203 01:15:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:03.203 01:15:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:03.203 ************************************ 00:15:03.203 END TEST nvmf_lvol 00:15:03.203 ************************************ 00:15:03.203 01:15:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:03.203 01:15:25 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:15:03.203 01:15:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:03.203 01:15:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:03.203 01:15:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:03.203 ************************************ 00:15:03.203 START TEST nvmf_lvs_grow 00:15:03.203 ************************************ 00:15:03.203 01:15:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:15:03.463 * Looking for test storage... 
00:15:03.463 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:03.463 01:15:25 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:03.463 01:15:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:15:03.463 01:15:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:03.463 01:15:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:03.463 01:15:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:03.463 01:15:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:03.463 01:15:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:03.463 01:15:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:03.463 01:15:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:03.463 01:15:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:03.463 01:15:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:03.463 01:15:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:03.463 01:15:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:03.463 01:15:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:03.463 01:15:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:03.463 01:15:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:03.463 01:15:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:03.463 01:15:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:03.463 01:15:25 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:03.463 01:15:25 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:03.463 01:15:25 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:03.463 01:15:25 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:03.463 01:15:25 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.463 01:15:25 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.463 01:15:25 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.463 01:15:25 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:15:03.463 01:15:25 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.463 01:15:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:15:03.463 01:15:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:03.463 01:15:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:03.463 01:15:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:03.463 01:15:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:03.463 01:15:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:03.463 01:15:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:03.463 01:15:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:03.463 01:15:25 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:03.463 01:15:25 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:03.463 01:15:25 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:03.463 01:15:25 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:15:03.463 01:15:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:03.463 01:15:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:03.463 01:15:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:03.463 01:15:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:03.463 01:15:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:03.463 01:15:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:03.463 01:15:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:03.463 01:15:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:03.463 01:15:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:03.463 01:15:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:03.463 01:15:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:15:03.463 01:15:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:10.041 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:10.041 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:15:10.041 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:10.041 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:10.041 01:15:31 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:10.041 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:10.041 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:10.041 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:15:10.041 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:10.041 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:15:10.041 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:15:10.041 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:10.042 01:15:31 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:10.042 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:10.042 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:10.042 01:15:31 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:10.042 Found net devices under 0000:86:00.0: cvl_0_0 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:10.042 Found net devices under 0000:86:00.1: cvl_0_1 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 
00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:10.042 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:10.042 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:15:10.042 00:15:10.042 --- 10.0.0.2 ping statistics --- 00:15:10.042 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:10.042 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:10.042 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:10.042 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.402 ms 00:15:10.042 00:15:10.042 --- 10.0.0.1 ping statistics --- 00:15:10.042 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:10.042 rtt min/avg/max/mdev = 0.402/0.402/0.402/0.000 ms 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=850197 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 850197 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 850197 ']' 
00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:10.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:10.042 01:15:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:10.042 [2024-07-25 01:15:31.604166] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:15:10.042 [2024-07-25 01:15:31.604206] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:10.042 EAL: No free 2048 kB hugepages reported on node 1 00:15:10.042 [2024-07-25 01:15:31.663460] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:10.042 [2024-07-25 01:15:31.743504] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:10.043 [2024-07-25 01:15:31.743539] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:10.043 [2024-07-25 01:15:31.743546] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:10.043 [2024-07-25 01:15:31.743553] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:10.043 [2024-07-25 01:15:31.743558] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:10.043 [2024-07-25 01:15:31.743575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:10.043 01:15:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:10.043 01:15:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:15:10.043 01:15:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:10.043 01:15:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:10.043 01:15:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:10.043 01:15:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:10.043 01:15:32 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:10.302 [2024-07-25 01:15:32.598531] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:10.302 01:15:32 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:15:10.302 01:15:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:10.302 01:15:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:10.302 01:15:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:10.302 ************************************ 00:15:10.302 START TEST lvs_grow_clean 00:15:10.302 ************************************ 00:15:10.302 01:15:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:15:10.302 01:15:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:15:10.302 01:15:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:15:10.302 01:15:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:15:10.302 01:15:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:15:10.302 01:15:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:15:10.302 01:15:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:15:10.302 01:15:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:10.302 01:15:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:10.302 01:15:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:10.562 01:15:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:15:10.562 01:15:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:15:10.562 01:15:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=8524e3a7-3fe3-4895-9acc-d431ea3c016b 00:15:10.562 01:15:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:15:10.562 01:15:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8524e3a7-3fe3-4895-9acc-d431ea3c016b 00:15:10.822 01:15:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:15:10.822 01:15:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:15:10.822 01:15:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8524e3a7-3fe3-4895-9acc-d431ea3c016b lvol 150 00:15:11.081 01:15:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=17a14568-1d17-4608-a62a-d46d9cd6b601 00:15:11.081 01:15:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:11.081 01:15:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:15:11.081 [2024-07-25 01:15:33.548675] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:15:11.081 [2024-07-25 01:15:33.548729] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:15:11.081 true 00:15:11.081 01:15:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8524e3a7-3fe3-4895-9acc-d431ea3c016b 00:15:11.081 01:15:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:15:11.340 01:15:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:15:11.340 01:15:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
00:15:11.600 01:15:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 17a14568-1d17-4608-a62a-d46d9cd6b601 00:15:11.600 01:15:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:11.860 [2024-07-25 01:15:34.218678] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:11.860 01:15:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:12.120 01:15:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:15:12.120 01:15:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=850702 00:15:12.120 01:15:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:12.120 01:15:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 850702 /var/tmp/bdevperf.sock 00:15:12.120 01:15:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 850702 ']' 00:15:12.120 01:15:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:12.120 01:15:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:12.120 01:15:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:12.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:12.120 01:15:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:12.120 01:15:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:15:12.120 [2024-07-25 01:15:34.432277] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:15:12.120 [2024-07-25 01:15:34.432320] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid850702 ] 00:15:12.120 EAL: No free 2048 kB hugepages reported on node 1 00:15:12.120 [2024-07-25 01:15:34.486048] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:12.120 [2024-07-25 01:15:34.564072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:12.380 01:15:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:12.380 01:15:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:15:12.380 01:15:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:15:12.639 Nvme0n1 00:15:12.639 01:15:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:15:12.639 [ 00:15:12.639 { 00:15:12.639 "name": "Nvme0n1", 00:15:12.639 "aliases": [ 00:15:12.639 "17a14568-1d17-4608-a62a-d46d9cd6b601" 
00:15:12.639 ], 00:15:12.639 "product_name": "NVMe disk", 00:15:12.639 "block_size": 4096, 00:15:12.639 "num_blocks": 38912, 00:15:12.639 "uuid": "17a14568-1d17-4608-a62a-d46d9cd6b601", 00:15:12.639 "assigned_rate_limits": { 00:15:12.639 "rw_ios_per_sec": 0, 00:15:12.639 "rw_mbytes_per_sec": 0, 00:15:12.639 "r_mbytes_per_sec": 0, 00:15:12.639 "w_mbytes_per_sec": 0 00:15:12.639 }, 00:15:12.639 "claimed": false, 00:15:12.639 "zoned": false, 00:15:12.639 "supported_io_types": { 00:15:12.639 "read": true, 00:15:12.639 "write": true, 00:15:12.639 "unmap": true, 00:15:12.639 "flush": true, 00:15:12.639 "reset": true, 00:15:12.639 "nvme_admin": true, 00:15:12.639 "nvme_io": true, 00:15:12.639 "nvme_io_md": false, 00:15:12.639 "write_zeroes": true, 00:15:12.639 "zcopy": false, 00:15:12.639 "get_zone_info": false, 00:15:12.639 "zone_management": false, 00:15:12.639 "zone_append": false, 00:15:12.639 "compare": true, 00:15:12.639 "compare_and_write": true, 00:15:12.639 "abort": true, 00:15:12.639 "seek_hole": false, 00:15:12.639 "seek_data": false, 00:15:12.639 "copy": true, 00:15:12.639 "nvme_iov_md": false 00:15:12.639 }, 00:15:12.639 "memory_domains": [ 00:15:12.639 { 00:15:12.639 "dma_device_id": "system", 00:15:12.639 "dma_device_type": 1 00:15:12.639 } 00:15:12.639 ], 00:15:12.639 "driver_specific": { 00:15:12.639 "nvme": [ 00:15:12.640 { 00:15:12.640 "trid": { 00:15:12.640 "trtype": "TCP", 00:15:12.640 "adrfam": "IPv4", 00:15:12.640 "traddr": "10.0.0.2", 00:15:12.640 "trsvcid": "4420", 00:15:12.640 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:15:12.640 }, 00:15:12.640 "ctrlr_data": { 00:15:12.640 "cntlid": 1, 00:15:12.640 "vendor_id": "0x8086", 00:15:12.640 "model_number": "SPDK bdev Controller", 00:15:12.640 "serial_number": "SPDK0", 00:15:12.640 "firmware_revision": "24.09", 00:15:12.640 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:12.640 "oacs": { 00:15:12.640 "security": 0, 00:15:12.640 "format": 0, 00:15:12.640 "firmware": 0, 00:15:12.640 "ns_manage": 0 
00:15:12.640 }, 00:15:12.640 "multi_ctrlr": true, 00:15:12.640 "ana_reporting": false 00:15:12.640 }, 00:15:12.640 "vs": { 00:15:12.640 "nvme_version": "1.3" 00:15:12.640 }, 00:15:12.640 "ns_data": { 00:15:12.640 "id": 1, 00:15:12.640 "can_share": true 00:15:12.640 } 00:15:12.640 } 00:15:12.640 ], 00:15:12.640 "mp_policy": "active_passive" 00:15:12.640 } 00:15:12.640 } 00:15:12.640 ] 00:15:12.640 01:15:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=850929 00:15:12.640 01:15:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:15:12.640 01:15:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:12.900 Running I/O for 10 seconds... 00:15:13.838 Latency(us) 00:15:13.838 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:13.838 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:13.838 Nvme0n1 : 1.00 22037.00 86.08 0.00 0.00 0.00 0.00 0.00 00:15:13.838 =================================================================================================================== 00:15:13.838 Total : 22037.00 86.08 0.00 0.00 0.00 0.00 0.00 00:15:13.838 00:15:14.809 01:15:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8524e3a7-3fe3-4895-9acc-d431ea3c016b 00:15:14.809 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:14.809 Nvme0n1 : 2.00 22431.50 87.62 0.00 0.00 0.00 0.00 0.00 00:15:14.809 =================================================================================================================== 00:15:14.809 Total : 22431.50 87.62 0.00 0.00 0.00 0.00 0.00 00:15:14.809 00:15:14.809 true 00:15:15.069 01:15:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean 
-- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:15.069 01:15:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8524e3a7-3fe3-4895-9acc-d431ea3c016b 00:15:15.069 01:15:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:15:15.069 01:15:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:15:15.069 01:15:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 850929 00:15:16.008 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:16.008 Nvme0n1 : 3.00 22445.67 87.68 0.00 0.00 0.00 0.00 0.00 00:15:16.008 =================================================================================================================== 00:15:16.008 Total : 22445.67 87.68 0.00 0.00 0.00 0.00 0.00 00:15:16.008 00:15:16.945 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:16.945 Nvme0n1 : 4.00 22615.25 88.34 0.00 0.00 0.00 0.00 0.00 00:15:16.945 =================================================================================================================== 00:15:16.945 Total : 22615.25 88.34 0.00 0.00 0.00 0.00 0.00 00:15:16.945 00:15:17.885 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:17.885 Nvme0n1 : 5.00 22528.60 88.00 0.00 0.00 0.00 0.00 0.00 00:15:17.885 =================================================================================================================== 00:15:17.885 Total : 22528.60 88.00 0.00 0.00 0.00 0.00 0.00 00:15:17.885 00:15:18.823 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:18.823 Nvme0n1 : 6.00 22438.67 87.65 0.00 0.00 0.00 0.00 0.00 00:15:18.823 
=================================================================================================================== 00:15:18.823 Total : 22438.67 87.65 0.00 0.00 0.00 0.00 0.00 00:15:18.823 00:15:19.761 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:19.762 Nvme0n1 : 7.00 22448.86 87.69 0.00 0.00 0.00 0.00 0.00 00:15:19.762 =================================================================================================================== 00:15:19.762 Total : 22448.86 87.69 0.00 0.00 0.00 0.00 0.00 00:15:19.762 00:15:21.141 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:21.141 Nvme0n1 : 8.00 22436.75 87.64 0.00 0.00 0.00 0.00 0.00 00:15:21.141 =================================================================================================================== 00:15:21.141 Total : 22436.75 87.64 0.00 0.00 0.00 0.00 0.00 00:15:21.141 00:15:22.078 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:22.078 Nvme0n1 : 9.00 22453.56 87.71 0.00 0.00 0.00 0.00 0.00 00:15:22.078 =================================================================================================================== 00:15:22.078 Total : 22453.56 87.71 0.00 0.00 0.00 0.00 0.00 00:15:22.078 00:15:23.017 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:23.017 Nvme0n1 : 10.00 22466.00 87.76 0.00 0.00 0.00 0.00 0.00 00:15:23.017 =================================================================================================================== 00:15:23.017 Total : 22466.00 87.76 0.00 0.00 0.00 0.00 0.00 00:15:23.017 00:15:23.017 00:15:23.017 Latency(us) 00:15:23.017 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:23.017 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:23.017 Nvme0n1 : 10.01 22462.45 87.74 0.00 0.00 5693.70 2308.01 25530.55 00:15:23.017 
=================================================================================================================== 00:15:23.017 Total : 22462.45 87.74 0.00 0.00 5693.70 2308.01 25530.55 00:15:23.017 0 00:15:23.017 01:15:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 850702 00:15:23.017 01:15:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 850702 ']' 00:15:23.017 01:15:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 850702 00:15:23.017 01:15:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:15:23.017 01:15:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:23.017 01:15:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 850702 00:15:23.017 01:15:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:23.017 01:15:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:23.017 01:15:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 850702' 00:15:23.017 killing process with pid 850702 00:15:23.017 01:15:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 850702 00:15:23.017 Received shutdown signal, test time was about 10.000000 seconds 00:15:23.017 00:15:23.017 Latency(us) 00:15:23.017 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:23.017 =================================================================================================================== 00:15:23.017 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:23.017 01:15:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 850702 00:15:23.017 01:15:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:23.276 01:15:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:23.535 01:15:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8524e3a7-3fe3-4895-9acc-d431ea3c016b 00:15:23.535 01:15:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:15:23.795 01:15:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:15:23.795 01:15:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:15:23.795 01:15:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:23.795 [2024-07-25 01:15:46.209449] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:23.795 01:15:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8524e3a7-3fe3-4895-9acc-d431ea3c016b 00:15:23.795 01:15:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:15:23.795 01:15:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8524e3a7-3fe3-4895-9acc-d431ea3c016b 00:15:23.795 01:15:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:23.795 01:15:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:23.795 01:15:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:23.795 01:15:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:23.795 01:15:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:23.795 01:15:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:23.795 01:15:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:23.795 01:15:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:23.795 01:15:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8524e3a7-3fe3-4895-9acc-d431ea3c016b 00:15:24.055 request: 00:15:24.055 { 00:15:24.055 "uuid": "8524e3a7-3fe3-4895-9acc-d431ea3c016b", 00:15:24.055 "method": "bdev_lvol_get_lvstores", 00:15:24.055 "req_id": 1 00:15:24.055 } 00:15:24.055 Got JSON-RPC error response 00:15:24.055 response: 00:15:24.055 { 00:15:24.055 "code": -19, 00:15:24.056 "message": "No such device" 00:15:24.056 } 00:15:24.056 01:15:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:15:24.056 01:15:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:24.056 01:15:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:24.056 01:15:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:24.056 01:15:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:24.315 aio_bdev 00:15:24.315 01:15:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 17a14568-1d17-4608-a62a-d46d9cd6b601 00:15:24.315 01:15:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=17a14568-1d17-4608-a62a-d46d9cd6b601 00:15:24.316 01:15:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:24.316 01:15:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:15:24.316 01:15:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:24.316 01:15:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:24.316 01:15:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:24.316 01:15:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 17a14568-1d17-4608-a62a-d46d9cd6b601 -t 2000 00:15:24.575 [ 00:15:24.575 { 00:15:24.575 "name": "17a14568-1d17-4608-a62a-d46d9cd6b601", 00:15:24.575 "aliases": [ 00:15:24.575 "lvs/lvol" 00:15:24.575 ], 00:15:24.575 "product_name": "Logical Volume", 00:15:24.575 "block_size": 4096, 00:15:24.575 "num_blocks": 38912, 00:15:24.575 "uuid": "17a14568-1d17-4608-a62a-d46d9cd6b601", 00:15:24.575 "assigned_rate_limits": { 00:15:24.575 
"rw_ios_per_sec": 0, 00:15:24.575 "rw_mbytes_per_sec": 0, 00:15:24.575 "r_mbytes_per_sec": 0, 00:15:24.575 "w_mbytes_per_sec": 0 00:15:24.575 }, 00:15:24.575 "claimed": false, 00:15:24.575 "zoned": false, 00:15:24.575 "supported_io_types": { 00:15:24.575 "read": true, 00:15:24.575 "write": true, 00:15:24.575 "unmap": true, 00:15:24.575 "flush": false, 00:15:24.575 "reset": true, 00:15:24.575 "nvme_admin": false, 00:15:24.575 "nvme_io": false, 00:15:24.575 "nvme_io_md": false, 00:15:24.575 "write_zeroes": true, 00:15:24.575 "zcopy": false, 00:15:24.575 "get_zone_info": false, 00:15:24.575 "zone_management": false, 00:15:24.575 "zone_append": false, 00:15:24.575 "compare": false, 00:15:24.575 "compare_and_write": false, 00:15:24.575 "abort": false, 00:15:24.575 "seek_hole": true, 00:15:24.575 "seek_data": true, 00:15:24.575 "copy": false, 00:15:24.575 "nvme_iov_md": false 00:15:24.575 }, 00:15:24.575 "driver_specific": { 00:15:24.575 "lvol": { 00:15:24.575 "lvol_store_uuid": "8524e3a7-3fe3-4895-9acc-d431ea3c016b", 00:15:24.575 "base_bdev": "aio_bdev", 00:15:24.575 "thin_provision": false, 00:15:24.575 "num_allocated_clusters": 38, 00:15:24.575 "snapshot": false, 00:15:24.575 "clone": false, 00:15:24.575 "esnap_clone": false 00:15:24.575 } 00:15:24.575 } 00:15:24.575 } 00:15:24.575 ] 00:15:24.575 01:15:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:15:24.575 01:15:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8524e3a7-3fe3-4895-9acc-d431ea3c016b 00:15:24.575 01:15:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:15:24.834 01:15:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:15:24.834 01:15:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8524e3a7-3fe3-4895-9acc-d431ea3c016b 00:15:24.835 01:15:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:15:24.835 01:15:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:15:24.835 01:15:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 17a14568-1d17-4608-a62a-d46d9cd6b601 00:15:25.094 01:15:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8524e3a7-3fe3-4895-9acc-d431ea3c016b 00:15:25.354 01:15:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:25.354 01:15:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:25.354 00:15:25.354 real 0m15.179s 00:15:25.354 user 0m14.661s 00:15:25.354 sys 0m1.449s 00:15:25.354 01:15:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:25.354 01:15:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:15:25.354 ************************************ 00:15:25.354 END TEST lvs_grow_clean 00:15:25.354 ************************************ 00:15:25.614 01:15:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:15:25.614 01:15:47 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:15:25.614 01:15:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:25.614 01:15:47 nvmf_tcp.nvmf_lvs_grow -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:15:25.614 01:15:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:25.614 ************************************ 00:15:25.614 START TEST lvs_grow_dirty 00:15:25.614 ************************************ 00:15:25.614 01:15:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:15:25.614 01:15:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:15:25.614 01:15:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:15:25.614 01:15:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:15:25.614 01:15:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:15:25.614 01:15:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:15:25.614 01:15:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:15:25.614 01:15:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:25.614 01:15:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:25.614 01:15:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:25.614 01:15:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:15:25.614 01:15:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:15:25.873 01:15:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=4772aa96-d46f-4de7-b77a-832c80a623ab 00:15:25.873 01:15:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4772aa96-d46f-4de7-b77a-832c80a623ab 00:15:25.873 01:15:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:15:26.133 01:15:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:15:26.133 01:15:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:15:26.133 01:15:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4772aa96-d46f-4de7-b77a-832c80a623ab lvol 150 00:15:26.133 01:15:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=827fd9de-8f34-4aca-9df6-84c5cfa21937 00:15:26.133 01:15:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:26.133 01:15:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:15:26.392 [2024-07-25 01:15:48.778670] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:15:26.392 [2024-07-25 01:15:48.778720] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:15:26.392 
true 00:15:26.392 01:15:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4772aa96-d46f-4de7-b77a-832c80a623ab 00:15:26.392 01:15:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:15:26.652 01:15:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:15:26.652 01:15:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:26.652 01:15:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 827fd9de-8f34-4aca-9df6-84c5cfa21937 00:15:26.912 01:15:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:27.171 [2024-07-25 01:15:49.448665] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:27.171 01:15:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:27.171 01:15:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:15:27.171 01:15:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=853289 00:15:27.172 01:15:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- 
# trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:27.172 01:15:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 853289 /var/tmp/bdevperf.sock 00:15:27.172 01:15:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 853289 ']' 00:15:27.172 01:15:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:27.172 01:15:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:27.172 01:15:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:27.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:27.172 01:15:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:27.172 01:15:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:27.172 [2024-07-25 01:15:49.662298] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:15:27.172 [2024-07-25 01:15:49.662346] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid853289 ] 00:15:27.432 EAL: No free 2048 kB hugepages reported on node 1 00:15:27.432 [2024-07-25 01:15:49.715517] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:27.432 [2024-07-25 01:15:49.795181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:28.001 01:15:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:28.001 01:15:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:15:28.001 01:15:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:15:28.571 Nvme0n1 00:15:28.571 01:15:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:15:28.571 [ 00:15:28.571 { 00:15:28.571 "name": "Nvme0n1", 00:15:28.571 "aliases": [ 00:15:28.571 "827fd9de-8f34-4aca-9df6-84c5cfa21937" 00:15:28.571 ], 00:15:28.571 "product_name": "NVMe disk", 00:15:28.571 "block_size": 4096, 00:15:28.571 "num_blocks": 38912, 00:15:28.571 "uuid": "827fd9de-8f34-4aca-9df6-84c5cfa21937", 00:15:28.571 "assigned_rate_limits": { 00:15:28.571 "rw_ios_per_sec": 0, 00:15:28.571 "rw_mbytes_per_sec": 0, 00:15:28.571 "r_mbytes_per_sec": 0, 00:15:28.571 "w_mbytes_per_sec": 0 00:15:28.571 }, 00:15:28.571 "claimed": false, 00:15:28.571 "zoned": false, 00:15:28.571 "supported_io_types": { 00:15:28.571 "read": true, 00:15:28.571 "write": true, 
00:15:28.571 "unmap": true, 00:15:28.571 "flush": true, 00:15:28.571 "reset": true, 00:15:28.571 "nvme_admin": true, 00:15:28.571 "nvme_io": true, 00:15:28.571 "nvme_io_md": false, 00:15:28.571 "write_zeroes": true, 00:15:28.571 "zcopy": false, 00:15:28.571 "get_zone_info": false, 00:15:28.571 "zone_management": false, 00:15:28.571 "zone_append": false, 00:15:28.571 "compare": true, 00:15:28.571 "compare_and_write": true, 00:15:28.571 "abort": true, 00:15:28.571 "seek_hole": false, 00:15:28.571 "seek_data": false, 00:15:28.571 "copy": true, 00:15:28.571 "nvme_iov_md": false 00:15:28.571 }, 00:15:28.571 "memory_domains": [ 00:15:28.571 { 00:15:28.571 "dma_device_id": "system", 00:15:28.571 "dma_device_type": 1 00:15:28.571 } 00:15:28.571 ], 00:15:28.571 "driver_specific": { 00:15:28.571 "nvme": [ 00:15:28.571 { 00:15:28.571 "trid": { 00:15:28.571 "trtype": "TCP", 00:15:28.571 "adrfam": "IPv4", 00:15:28.571 "traddr": "10.0.0.2", 00:15:28.572 "trsvcid": "4420", 00:15:28.572 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:15:28.572 }, 00:15:28.572 "ctrlr_data": { 00:15:28.572 "cntlid": 1, 00:15:28.572 "vendor_id": "0x8086", 00:15:28.572 "model_number": "SPDK bdev Controller", 00:15:28.572 "serial_number": "SPDK0", 00:15:28.572 "firmware_revision": "24.09", 00:15:28.572 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:28.572 "oacs": { 00:15:28.572 "security": 0, 00:15:28.572 "format": 0, 00:15:28.572 "firmware": 0, 00:15:28.572 "ns_manage": 0 00:15:28.572 }, 00:15:28.572 "multi_ctrlr": true, 00:15:28.572 "ana_reporting": false 00:15:28.572 }, 00:15:28.572 "vs": { 00:15:28.572 "nvme_version": "1.3" 00:15:28.572 }, 00:15:28.572 "ns_data": { 00:15:28.572 "id": 1, 00:15:28.572 "can_share": true 00:15:28.572 } 00:15:28.572 } 00:15:28.572 ], 00:15:28.572 "mp_policy": "active_passive" 00:15:28.572 } 00:15:28.572 } 00:15:28.572 ] 00:15:28.831 01:15:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=853526 00:15:28.831 01:15:51 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:15:28.831 01:15:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:28.831 Running I/O for 10 seconds... 00:15:29.771 Latency(us) 00:15:29.771 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:29.771 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:29.771 Nvme0n1 : 1.00 22162.00 86.57 0.00 0.00 0.00 0.00 0.00 00:15:29.771 =================================================================================================================== 00:15:29.771 Total : 22162.00 86.57 0.00 0.00 0.00 0.00 0.00 00:15:29.771 00:15:30.711 01:15:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4772aa96-d46f-4de7-b77a-832c80a623ab 00:15:30.711 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:30.711 Nvme0n1 : 2.00 22117.50 86.40 0.00 0.00 0.00 0.00 0.00 00:15:30.711 =================================================================================================================== 00:15:30.711 Total : 22117.50 86.40 0.00 0.00 0.00 0.00 0.00 00:15:30.711 00:15:30.971 true 00:15:30.971 01:15:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4772aa96-d46f-4de7-b77a-832c80a623ab 00:15:30.971 01:15:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:30.971 01:15:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:15:30.971 01:15:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 
00:15:30.971 01:15:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 853526 00:15:31.911 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:31.911 Nvme0n1 : 3.00 22297.33 87.10 0.00 0.00 0.00 0.00 0.00 00:15:31.911 =================================================================================================================== 00:15:31.911 Total : 22297.33 87.10 0.00 0.00 0.00 0.00 0.00 00:15:31.911 00:15:32.888 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:32.888 Nvme0n1 : 4.00 22371.00 87.39 0.00 0.00 0.00 0.00 0.00 00:15:32.888 =================================================================================================================== 00:15:32.889 Total : 22371.00 87.39 0.00 0.00 0.00 0.00 0.00 00:15:32.889 00:15:33.829 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:33.829 Nvme0n1 : 5.00 22493.60 87.87 0.00 0.00 0.00 0.00 0.00 00:15:33.829 =================================================================================================================== 00:15:33.829 Total : 22493.60 87.87 0.00 0.00 0.00 0.00 0.00 00:15:33.829 00:15:34.770 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:34.770 Nvme0n1 : 6.00 22634.17 88.41 0.00 0.00 0.00 0.00 0.00 00:15:34.770 =================================================================================================================== 00:15:34.770 Total : 22634.17 88.41 0.00 0.00 0.00 0.00 0.00 00:15:34.770 00:15:35.718 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:35.718 Nvme0n1 : 7.00 22640.43 88.44 0.00 0.00 0.00 0.00 0.00 00:15:35.718 =================================================================================================================== 00:15:35.718 Total : 22640.43 88.44 0.00 0.00 0.00 0.00 0.00 00:15:35.718 00:15:37.098 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:15:37.098 Nvme0n1 : 8.00 22628.00 88.39 0.00 0.00 0.00 0.00 0.00 00:15:37.098 =================================================================================================================== 00:15:37.098 Total : 22628.00 88.39 0.00 0.00 0.00 0.00 0.00 00:15:37.098 00:15:38.038 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:38.038 Nvme0n1 : 9.00 22598.33 88.27 0.00 0.00 0.00 0.00 0.00 00:15:38.038 =================================================================================================================== 00:15:38.038 Total : 22598.33 88.27 0.00 0.00 0.00 0.00 0.00 00:15:38.038 00:15:38.979 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:38.979 Nvme0n1 : 10.00 22588.20 88.24 0.00 0.00 0.00 0.00 0.00 00:15:38.979 =================================================================================================================== 00:15:38.979 Total : 22588.20 88.24 0.00 0.00 0.00 0.00 0.00 00:15:38.979 00:15:38.979 00:15:38.979 Latency(us) 00:15:38.979 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:38.979 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:38.979 Nvme0n1 : 10.01 22588.73 88.24 0.00 0.00 5663.03 2892.13 12993.22 00:15:38.979 =================================================================================================================== 00:15:38.979 Total : 22588.73 88.24 0.00 0.00 5663.03 2892.13 12993.22 00:15:38.979 0 00:15:38.979 01:16:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 853289 00:15:38.979 01:16:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 853289 ']' 00:15:38.979 01:16:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 853289 00:15:38.979 01:16:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:15:38.979 01:16:01 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:38.979 01:16:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 853289 00:15:38.979 01:16:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:38.979 01:16:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:38.979 01:16:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 853289' 00:15:38.980 killing process with pid 853289 00:15:38.980 01:16:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 853289 00:15:38.980 Received shutdown signal, test time was about 10.000000 seconds 00:15:38.980 00:15:38.980 Latency(us) 00:15:38.980 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:38.980 =================================================================================================================== 00:15:38.980 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:38.980 01:16:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 853289 00:15:38.980 01:16:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:39.239 01:16:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:39.498 01:16:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4772aa96-d46f-4de7-b77a-832c80a623ab 00:15:39.498 01:16:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:15:39.498 01:16:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:15:39.498 01:16:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:15:39.498 01:16:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 850197 00:15:39.498 01:16:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 850197 00:15:39.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 850197 Killed "${NVMF_APP[@]}" "$@" 00:15:39.757 01:16:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:15:39.757 01:16:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:15:39.757 01:16:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:39.757 01:16:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:39.757 01:16:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:39.757 01:16:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=855383 00:15:39.757 01:16:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 855383 00:15:39.757 01:16:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:39.757 01:16:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 855383 ']' 00:15:39.757 01:16:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:39.757 01:16:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:39.757 
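The `free_clusters=61` read back above is consistent with the lvol metadata that appears later in this run. A small illustrative check (not part of the test itself), using the cluster counts recorded in this log:

```python
# Consistency check assembled from values recorded in this run: the "dirty"
# lvstore reports 61 free clusters because 38 of its 99 data clusters are
# allocated to the lvol.
total_data_clusters = 99      # from bdev_lvol_get_lvstores '.[0].total_data_clusters'
num_allocated_clusters = 38   # from bdev_get_bdevs 'num_allocated_clusters'
free_clusters = total_data_clusters - num_allocated_clusters
assert free_clusters == 61    # matches '.[0].free_clusters' in the log
```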
01:16:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:39.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:39.757 01:16:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:39.757 01:16:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:39.757 [2024-07-25 01:16:02.056799] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:15:39.757 [2024-07-25 01:16:02.056843] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:39.757 EAL: No free 2048 kB hugepages reported on node 1 00:15:39.757 [2024-07-25 01:16:02.115166] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.757 [2024-07-25 01:16:02.185884] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:39.757 [2024-07-25 01:16:02.185924] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:39.757 [2024-07-25 01:16:02.185930] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:39.757 [2024-07-25 01:16:02.185940] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:39.757 [2024-07-25 01:16:02.185945] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
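The `waitforlisten 855383` step above blocks until the restarted `nvmf_tgt` accepts connections on `/var/tmp/spdk.sock`. A minimal sketch of that polling pattern (names and retry logic are illustrative, not SPDK's actual implementation):

```python
import os
import socket
import time

def wait_for_rpc(sock_path="/var/tmp/spdk.sock", max_retries=100, delay=0.1):
    """Poll until something accepts connections on the UNIX-domain RPC socket."""
    for _ in range(max_retries):
        if os.path.exists(sock_path):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            try:
                s.connect(sock_path)
                return True
            except OSError:
                pass  # socket file exists but nothing is listening yet
            finally:
                s.close()
        time.sleep(delay)
    return False
```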
00:15:39.757 [2024-07-25 01:16:02.185963] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:40.697 01:16:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:40.697 01:16:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:15:40.697 01:16:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:40.697 01:16:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:40.697 01:16:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:40.697 01:16:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:40.697 01:16:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:40.697 [2024-07-25 01:16:03.047319] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:15:40.697 [2024-07-25 01:16:03.047405] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:15:40.697 [2024-07-25 01:16:03.047429] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:15:40.697 01:16:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:15:40.697 01:16:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 827fd9de-8f34-4aca-9df6-84c5cfa21937 00:15:40.697 01:16:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=827fd9de-8f34-4aca-9df6-84c5cfa21937 00:15:40.697 01:16:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:40.697 01:16:03 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:15:40.697 01:16:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:40.697 01:16:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:40.697 01:16:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:40.957 01:16:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 827fd9de-8f34-4aca-9df6-84c5cfa21937 -t 2000 00:15:40.957 [ 00:15:40.957 { 00:15:40.957 "name": "827fd9de-8f34-4aca-9df6-84c5cfa21937", 00:15:40.957 "aliases": [ 00:15:40.957 "lvs/lvol" 00:15:40.957 ], 00:15:40.957 "product_name": "Logical Volume", 00:15:40.957 "block_size": 4096, 00:15:40.957 "num_blocks": 38912, 00:15:40.957 "uuid": "827fd9de-8f34-4aca-9df6-84c5cfa21937", 00:15:40.957 "assigned_rate_limits": { 00:15:40.957 "rw_ios_per_sec": 0, 00:15:40.957 "rw_mbytes_per_sec": 0, 00:15:40.957 "r_mbytes_per_sec": 0, 00:15:40.957 "w_mbytes_per_sec": 0 00:15:40.957 }, 00:15:40.957 "claimed": false, 00:15:40.957 "zoned": false, 00:15:40.957 "supported_io_types": { 00:15:40.957 "read": true, 00:15:40.957 "write": true, 00:15:40.957 "unmap": true, 00:15:40.957 "flush": false, 00:15:40.957 "reset": true, 00:15:40.957 "nvme_admin": false, 00:15:40.957 "nvme_io": false, 00:15:40.957 "nvme_io_md": false, 00:15:40.957 "write_zeroes": true, 00:15:40.957 "zcopy": false, 00:15:40.957 "get_zone_info": false, 00:15:40.957 "zone_management": false, 00:15:40.957 "zone_append": false, 00:15:40.957 "compare": false, 00:15:40.957 "compare_and_write": false, 00:15:40.957 "abort": false, 00:15:40.957 "seek_hole": true, 00:15:40.957 "seek_data": true, 00:15:40.957 "copy": false, 00:15:40.957 "nvme_iov_md": false 
00:15:40.957 }, 00:15:40.957 "driver_specific": { 00:15:40.957 "lvol": { 00:15:40.957 "lvol_store_uuid": "4772aa96-d46f-4de7-b77a-832c80a623ab", 00:15:40.957 "base_bdev": "aio_bdev", 00:15:40.957 "thin_provision": false, 00:15:40.957 "num_allocated_clusters": 38, 00:15:40.957 "snapshot": false, 00:15:40.957 "clone": false, 00:15:40.957 "esnap_clone": false 00:15:40.957 } 00:15:40.957 } 00:15:40.957 } 00:15:40.957 ] 00:15:40.957 01:16:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:15:40.957 01:16:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4772aa96-d46f-4de7-b77a-832c80a623ab 00:15:40.957 01:16:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:15:41.217 01:16:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:15:41.217 01:16:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4772aa96-d46f-4de7-b77a-832c80a623ab 00:15:41.217 01:16:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:15:41.477 01:16:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:15:41.477 01:16:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:41.477 [2024-07-25 01:16:03.940097] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:41.791 01:16:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
4772aa96-d46f-4de7-b77a-832c80a623ab 00:15:41.791 01:16:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:15:41.791 01:16:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4772aa96-d46f-4de7-b77a-832c80a623ab 00:15:41.791 01:16:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:41.791 01:16:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:41.791 01:16:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:41.791 01:16:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:41.791 01:16:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:41.791 01:16:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:41.791 01:16:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:41.791 01:16:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:41.791 01:16:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4772aa96-d46f-4de7-b77a-832c80a623ab 00:15:41.791 request: 00:15:41.791 { 00:15:41.791 "uuid": "4772aa96-d46f-4de7-b77a-832c80a623ab", 00:15:41.791 "method": "bdev_lvol_get_lvstores", 
00:15:41.791 "req_id": 1 00:15:41.791 } 00:15:41.791 Got JSON-RPC error response 00:15:41.791 response: 00:15:41.791 { 00:15:41.791 "code": -19, 00:15:41.791 "message": "No such device" 00:15:41.791 } 00:15:41.791 01:16:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:15:41.791 01:16:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:41.791 01:16:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:41.791 01:16:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:41.791 01:16:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:42.051 aio_bdev 00:15:42.051 01:16:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 827fd9de-8f34-4aca-9df6-84c5cfa21937 00:15:42.051 01:16:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=827fd9de-8f34-4aca-9df6-84c5cfa21937 00:15:42.051 01:16:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:42.051 01:16:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:15:42.051 01:16:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:42.051 01:16:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:42.051 01:16:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:42.051 01:16:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 827fd9de-8f34-4aca-9df6-84c5cfa21937 -t 2000 00:15:42.311 [ 00:15:42.311 { 00:15:42.311 "name": "827fd9de-8f34-4aca-9df6-84c5cfa21937", 00:15:42.311 "aliases": [ 00:15:42.311 "lvs/lvol" 00:15:42.311 ], 00:15:42.311 "product_name": "Logical Volume", 00:15:42.311 "block_size": 4096, 00:15:42.311 "num_blocks": 38912, 00:15:42.311 "uuid": "827fd9de-8f34-4aca-9df6-84c5cfa21937", 00:15:42.311 "assigned_rate_limits": { 00:15:42.311 "rw_ios_per_sec": 0, 00:15:42.311 "rw_mbytes_per_sec": 0, 00:15:42.311 "r_mbytes_per_sec": 0, 00:15:42.311 "w_mbytes_per_sec": 0 00:15:42.311 }, 00:15:42.311 "claimed": false, 00:15:42.311 "zoned": false, 00:15:42.311 "supported_io_types": { 00:15:42.311 "read": true, 00:15:42.311 "write": true, 00:15:42.311 "unmap": true, 00:15:42.311 "flush": false, 00:15:42.311 "reset": true, 00:15:42.311 "nvme_admin": false, 00:15:42.311 "nvme_io": false, 00:15:42.311 "nvme_io_md": false, 00:15:42.311 "write_zeroes": true, 00:15:42.311 "zcopy": false, 00:15:42.311 "get_zone_info": false, 00:15:42.311 "zone_management": false, 00:15:42.311 "zone_append": false, 00:15:42.311 "compare": false, 00:15:42.311 "compare_and_write": false, 00:15:42.311 "abort": false, 00:15:42.311 "seek_hole": true, 00:15:42.311 "seek_data": true, 00:15:42.311 "copy": false, 00:15:42.311 "nvme_iov_md": false 00:15:42.311 }, 00:15:42.311 "driver_specific": { 00:15:42.311 "lvol": { 00:15:42.311 "lvol_store_uuid": "4772aa96-d46f-4de7-b77a-832c80a623ab", 00:15:42.311 "base_bdev": "aio_bdev", 00:15:42.311 "thin_provision": false, 00:15:42.311 "num_allocated_clusters": 38, 00:15:42.311 "snapshot": false, 00:15:42.311 "clone": false, 00:15:42.311 "esnap_clone": false 00:15:42.311 } 00:15:42.311 } 00:15:42.311 } 00:15:42.311 ] 00:15:42.312 01:16:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:15:42.312 01:16:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4772aa96-d46f-4de7-b77a-832c80a623ab 00:15:42.312 01:16:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:15:42.571 01:16:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:15:42.571 01:16:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4772aa96-d46f-4de7-b77a-832c80a623ab 00:15:42.571 01:16:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:15:42.571 01:16:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:15:42.571 01:16:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 827fd9de-8f34-4aca-9df6-84c5cfa21937 00:15:42.832 01:16:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4772aa96-d46f-4de7-b77a-832c80a623ab 00:15:43.093 01:16:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:43.093 01:16:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:43.093 00:15:43.093 real 0m17.677s 00:15:43.093 user 0m44.363s 00:15:43.093 sys 0m4.083s 00:15:43.093 01:16:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:43.093 01:16:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 
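The checks above pipe `bdev_lvol_get_lvstores` output through `jq -r '.[0].free_clusters'` and `'.[0].total_data_clusters'`. The equivalent extraction in Python, applied to a trimmed stand-in for the RPC response recorded in this run (not the full output):

```python
import json

# Trimmed stand-in shaped like the bdev_lvol_get_lvstores response in this log.
response = json.loads('[{"uuid": "4772aa96-d46f-4de7-b77a-832c80a623ab", '
                      '"free_clusters": 61, "total_data_clusters": 99}]')
assert response[0]["free_clusters"] == 61        # jq: .[0].free_clusters
assert response[0]["total_data_clusters"] == 99  # jq: .[0].total_data_clusters
```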
00:15:43.093 ************************************ 00:15:43.093 END TEST lvs_grow_dirty 00:15:43.093 ************************************ 00:15:43.354 01:16:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:15:43.354 01:16:05 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:15:43.354 01:16:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:15:43.354 01:16:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:15:43.354 01:16:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:15:43.354 01:16:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:43.354 01:16:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:15:43.354 01:16:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:15:43.354 01:16:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:15:43.354 01:16:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:43.354 nvmf_trace.0 00:15:43.354 01:16:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:15:43.354 01:16:05 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:15:43.354 01:16:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:43.354 01:16:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:15:43.354 01:16:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:43.354 01:16:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:15:43.354 01:16:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:43.354 01:16:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:43.354 rmmod 
nvme_tcp 00:15:43.354 rmmod nvme_fabrics 00:15:43.354 rmmod nvme_keyring 00:15:43.354 01:16:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:43.354 01:16:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:15:43.354 01:16:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:15:43.354 01:16:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 855383 ']' 00:15:43.354 01:16:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 855383 00:15:43.354 01:16:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 855383 ']' 00:15:43.354 01:16:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 855383 00:15:43.354 01:16:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:15:43.354 01:16:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:43.354 01:16:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 855383 00:15:43.354 01:16:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:43.354 01:16:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:43.354 01:16:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 855383' 00:15:43.354 killing process with pid 855383 00:15:43.354 01:16:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 855383 00:15:43.354 01:16:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 855383 00:15:43.614 01:16:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:43.614 01:16:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:43.614 01:16:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:43.614 01:16:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:43.614 
01:16:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:43.614 01:16:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:43.614 01:16:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:43.614 01:16:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:45.525 01:16:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:45.525 00:15:45.525 real 0m42.362s 00:15:45.525 user 1m4.869s 00:15:45.525 sys 0m10.353s 00:15:45.525 01:16:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:45.525 01:16:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:45.525 ************************************ 00:15:45.525 END TEST nvmf_lvs_grow 00:15:45.525 ************************************ 00:15:45.785 01:16:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:45.785 01:16:08 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:45.785 01:16:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:45.785 01:16:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:45.785 01:16:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:45.785 ************************************ 00:15:45.785 START TEST nvmf_bdev_io_wait 00:15:45.785 ************************************ 00:15:45.785 01:16:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:45.785 * Looking for test storage... 
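The `killprocess` helper seen above first runs `kill -0 <pid>` to test that the pid still exists before signalling it. A sketch of that idiom (illustrative, not the helper's actual code):

```python
import os

def process_alive(pid):
    """`kill -0` idiom: signal 0 delivers nothing but reports pid existence."""
    try:
        os.kill(pid, 0)
        return True
    except ProcessLookupError:
        return False

# The current process is, by definition, alive.
assert process_alive(os.getpid())
```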
00:15:45.785 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:45.785 01:16:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:45.785 01:16:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:15:45.785 01:16:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:45.785 01:16:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:45.785 01:16:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:45.785 01:16:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:45.785 01:16:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:45.785 01:16:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:45.785 01:16:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:45.785 01:16:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:45.785 01:16:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:45.785 01:16:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:45.785 01:16:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:45.785 01:16:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:45.785 01:16:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:45.785 01:16:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:45.785 01:16:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:45.785 01:16:08 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:45.785 01:16:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:45.785 01:16:08 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:45.785 01:16:08 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:45.785 01:16:08 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:45.785 01:16:08 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.785 01:16:08 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.785 01:16:08 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.785 01:16:08 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:15:45.786 01:16:08 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.786 01:16:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:15:45.786 01:16:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:45.786 01:16:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:45.786 01:16:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:45.786 01:16:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:45.786 01:16:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:45.786 01:16:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:45.786 01:16:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 
0 -eq 1 ']' 00:15:45.786 01:16:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:45.786 01:16:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:45.786 01:16:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:45.786 01:16:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:15:45.786 01:16:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:45.786 01:16:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:45.786 01:16:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:45.786 01:16:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:45.786 01:16:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:45.786 01:16:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:45.786 01:16:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:45.786 01:16:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:45.786 01:16:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:45.786 01:16:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:45.786 01:16:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:15:45.786 01:16:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:51.071 01:16:13 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:51.071 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:51.071 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:51.071 01:16:13 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:51.071 Found net devices under 0000:86:00.0: cvl_0_0 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:51.071 Found net devices under 0000:86:00.1: cvl_0_1 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- 
# NVMF_SECOND_TARGET_IP= 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:51.071 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:51.071 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:15:51.071 00:15:51.071 --- 10.0.0.2 ping statistics --- 00:15:51.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.071 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:51.071 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:51.071 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.300 ms 00:15:51.071 00:15:51.071 --- 10.0.0.1 ping statistics --- 00:15:51.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.071 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- 
common/autotest_common.sh@10 -- # set +x 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=859439 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 859439 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 859439 ']' 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:51.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:51.071 01:16:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:51.071 [2024-07-25 01:16:13.506192] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:15:51.071 [2024-07-25 01:16:13.506238] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:51.071 EAL: No free 2048 kB hugepages reported on node 1 00:15:51.332 [2024-07-25 01:16:13.564221] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:51.332 [2024-07-25 01:16:13.648193] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:15:51.332 [2024-07-25 01:16:13.648231] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:51.332 [2024-07-25 01:16:13.648239] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:51.332 [2024-07-25 01:16:13.648245] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:51.332 [2024-07-25 01:16:13.648250] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:51.332 [2024-07-25 01:16:13.648290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:51.332 [2024-07-25 01:16:13.648308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:51.332 [2024-07-25 01:16:13.648408] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:51.332 [2024-07-25 01:16:13.648409] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.901 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:51.901 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:15:51.901 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:51.901 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:51.901 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:51.901 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:51.901 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:15:51.901 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.901 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:51.901 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.901 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:15:51.902 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.902 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:52.162 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.162 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:52.162 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.162 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:52.162 [2024-07-25 01:16:14.417804] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:52.162 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.162 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:52.162 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.162 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:52.162 Malloc0 00:15:52.162 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.162 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:52.162 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.162 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:52.162 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.162 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:52.162 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.162 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:52.162 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.162 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:52.162 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.162 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:52.162 [2024-07-25 01:16:14.474193] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:52.162 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.162 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=859672 00:15:52.162 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:15:52.162 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:15:52.162 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=859674 00:15:52.162 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:52.162 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:52.162 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:52.162 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:52.162 { 00:15:52.162 "params": { 00:15:52.162 "name": "Nvme$subsystem", 00:15:52.162 "trtype": "$TEST_TRANSPORT", 
00:15:52.162 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:52.162 "adrfam": "ipv4", 00:15:52.162 "trsvcid": "$NVMF_PORT", 00:15:52.162 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:52.162 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:52.162 "hdgst": ${hdgst:-false}, 00:15:52.162 "ddgst": ${ddgst:-false} 00:15:52.162 }, 00:15:52.162 "method": "bdev_nvme_attach_controller" 00:15:52.162 } 00:15:52.162 EOF 00:15:52.162 )") 00:15:52.162 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:15:52.162 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:15:52.162 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=859676 00:15:52.162 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:52.162 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:52.162 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:52.162 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:52.162 { 00:15:52.162 "params": { 00:15:52.162 "name": "Nvme$subsystem", 00:15:52.162 "trtype": "$TEST_TRANSPORT", 00:15:52.162 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:52.162 "adrfam": "ipv4", 00:15:52.162 "trsvcid": "$NVMF_PORT", 00:15:52.162 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:52.162 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:52.162 "hdgst": ${hdgst:-false}, 00:15:52.162 "ddgst": ${ddgst:-false} 00:15:52.162 }, 00:15:52.162 "method": "bdev_nvme_attach_controller" 00:15:52.162 } 00:15:52.162 EOF 00:15:52.162 )") 00:15:52.162 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 
128 -o 4096 -w flush -t 1 -s 256 00:15:52.162 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:15:52.162 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=859679 00:15:52.162 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:52.162 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:15:52.162 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:52.162 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:52.162 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:52.162 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:52.162 { 00:15:52.162 "params": { 00:15:52.162 "name": "Nvme$subsystem", 00:15:52.162 "trtype": "$TEST_TRANSPORT", 00:15:52.162 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:52.162 "adrfam": "ipv4", 00:15:52.162 "trsvcid": "$NVMF_PORT", 00:15:52.162 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:52.162 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:52.162 "hdgst": ${hdgst:-false}, 00:15:52.162 "ddgst": ${ddgst:-false} 00:15:52.162 }, 00:15:52.162 "method": "bdev_nvme_attach_controller" 00:15:52.162 } 00:15:52.162 EOF 00:15:52.162 )") 00:15:52.162 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:15:52.162 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:15:52.162 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:52.162 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:52.162 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:52.162 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:52.162 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:52.163 { 00:15:52.163 "params": { 00:15:52.163 "name": "Nvme$subsystem", 00:15:52.163 "trtype": "$TEST_TRANSPORT", 00:15:52.163 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:52.163 "adrfam": "ipv4", 00:15:52.163 "trsvcid": "$NVMF_PORT", 00:15:52.163 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:52.163 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:52.163 "hdgst": ${hdgst:-false}, 00:15:52.163 "ddgst": ${ddgst:-false} 00:15:52.163 }, 00:15:52.163 "method": "bdev_nvme_attach_controller" 00:15:52.163 } 00:15:52.163 EOF 00:15:52.163 )") 00:15:52.163 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:52.163 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:52.163 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 859672 00:15:52.163 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:52.163 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:52.163 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:52.163 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:52.163 "params": { 00:15:52.163 "name": "Nvme1", 00:15:52.163 "trtype": "tcp", 00:15:52.163 "traddr": "10.0.0.2", 00:15:52.163 "adrfam": "ipv4", 00:15:52.163 "trsvcid": "4420", 00:15:52.163 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:52.163 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:52.163 "hdgst": false, 00:15:52.163 "ddgst": false 00:15:52.163 }, 00:15:52.163 "method": "bdev_nvme_attach_controller" 00:15:52.163 }' 00:15:52.163 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:52.163 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:52.163 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:15:52.163 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:52.163 "params": { 00:15:52.163 "name": "Nvme1", 00:15:52.163 "trtype": "tcp", 00:15:52.163 "traddr": "10.0.0.2", 00:15:52.163 "adrfam": "ipv4", 00:15:52.163 "trsvcid": "4420", 00:15:52.163 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:52.163 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:52.163 "hdgst": false, 00:15:52.163 "ddgst": false 00:15:52.163 }, 00:15:52.163 "method": "bdev_nvme_attach_controller" 00:15:52.163 }' 00:15:52.163 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:52.163 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:52.163 "params": { 00:15:52.163 "name": "Nvme1", 00:15:52.163 "trtype": "tcp", 00:15:52.163 "traddr": "10.0.0.2", 00:15:52.163 "adrfam": "ipv4", 00:15:52.163 "trsvcid": "4420", 00:15:52.163 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:52.163 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:52.163 "hdgst": false, 00:15:52.163 "ddgst": false 00:15:52.163 }, 00:15:52.163 "method": "bdev_nvme_attach_controller" 00:15:52.163 }' 00:15:52.163 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:52.163 01:16:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:52.163 "params": { 00:15:52.163 "name": "Nvme1", 00:15:52.163 "trtype": "tcp", 00:15:52.163 "traddr": "10.0.0.2", 00:15:52.163 "adrfam": "ipv4", 00:15:52.163 "trsvcid": "4420", 00:15:52.163 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:52.163 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:52.163 "hdgst": false, 00:15:52.163 "ddgst": false 00:15:52.163 }, 00:15:52.163 "method": "bdev_nvme_attach_controller" 00:15:52.163 }' 00:15:52.163 [2024-07-25 01:16:14.523836] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:15:52.163 [2024-07-25 01:16:14.523880] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:15:52.163 [2024-07-25 01:16:14.526337] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:15:52.163 [2024-07-25 01:16:14.526344] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:15:52.163 [2024-07-25 01:16:14.526389] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:15:52.163 [2024-07-25 01:16:14.526389] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:15:52.163 [2024-07-25 01:16:14.529454] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:15:52.163 [2024-07-25 01:16:14.529495] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:15:52.163 EAL: No free 2048 kB hugepages reported on node 1 00:15:52.163 EAL: No free 2048 kB hugepages reported on node 1 00:15:52.423 [2024-07-25 01:16:14.672478] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:52.423 EAL: No free 2048 kB hugepages reported on node 1 00:15:52.423 [2024-07-25 01:16:14.741580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:15:52.423 [2024-07-25 01:16:14.762143] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:52.423 EAL: No free 2048 kB hugepages reported on node 1 00:15:52.423 [2024-07-25 01:16:14.838476] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:15:52.423 [2024-07-25 01:16:14.859129] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:52.423 [2024-07-25 01:16:14.911955] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:52.682 [2024-07-25 01:16:14.947836] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:15:52.682 [2024-07-25 01:16:14.989819] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:15:52.941 Running I/O for 1 seconds... 00:15:52.941 Running I/O for 1 seconds... 00:15:52.941 Running I/O for 1 seconds... 00:15:52.941 Running I/O for 1 seconds... 
00:15:53.879 00:15:53.879 Latency(us) 00:15:53.879 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:53.879 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:15:53.879 Nvme1n1 : 1.01 12250.61 47.85 0.00 0.00 10408.74 3846.68 22225.25 00:15:53.879 =================================================================================================================== 00:15:53.879 Total : 12250.61 47.85 0.00 0.00 10408.74 3846.68 22225.25 00:15:53.879 00:15:53.879 Latency(us) 00:15:53.879 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:53.879 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:15:53.879 Nvme1n1 : 1.00 244285.55 954.24 0.00 0.00 521.61 206.58 687.42 00:15:53.879 =================================================================================================================== 00:15:53.879 Total : 244285.55 954.24 0.00 0.00 521.61 206.58 687.42 00:15:53.879 00:15:53.879 Latency(us) 00:15:53.879 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:53.879 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:15:53.879 Nvme1n1 : 1.01 9553.19 37.32 0.00 0.00 13354.43 6525.11 39663.53 00:15:53.879 =================================================================================================================== 00:15:53.879 Total : 9553.19 37.32 0.00 0.00 13354.43 6525.11 39663.53 00:15:53.879 00:15:53.879 Latency(us) 00:15:53.879 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:53.879 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:15:53.879 Nvme1n1 : 1.01 11369.76 44.41 0.00 0.00 11217.51 5100.41 23934.89 00:15:53.879 =================================================================================================================== 00:15:53.879 Total : 11369.76 44.41 0.00 0.00 11217.51 5100.41 23934.89 00:15:54.142 01:16:16 nvmf_tcp.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@38 -- # wait 859674 00:15:54.142 01:16:16 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 859676 00:15:54.142 01:16:16 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 859679 00:15:54.142 01:16:16 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:54.142 01:16:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.142 01:16:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:54.143 01:16:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.143 01:16:16 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:15:54.143 01:16:16 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:15:54.143 01:16:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:54.143 01:16:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:15:54.143 01:16:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:54.143 01:16:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:15:54.143 01:16:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:54.143 01:16:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:54.143 rmmod nvme_tcp 00:15:54.143 rmmod nvme_fabrics 00:15:54.143 rmmod nvme_keyring 00:15:54.143 01:16:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:54.143 01:16:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:15:54.143 01:16:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:15:54.143 01:16:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 859439 ']' 00:15:54.143 01:16:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 859439 00:15:54.143 01:16:16 
nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 859439 ']' 00:15:54.143 01:16:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 859439 00:15:54.143 01:16:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:15:54.143 01:16:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:54.143 01:16:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 859439 00:15:54.403 01:16:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:54.403 01:16:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:54.403 01:16:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 859439' 00:15:54.403 killing process with pid 859439 00:15:54.403 01:16:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 859439 00:15:54.403 01:16:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 859439 00:15:54.403 01:16:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:54.403 01:16:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:54.403 01:16:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:54.403 01:16:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:54.403 01:16:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:54.403 01:16:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:54.403 01:16:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:54.403 01:16:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:56.946 01:16:18 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:56.946 00:15:56.946 real 0m10.817s 00:15:56.946 user 0m19.861s 00:15:56.946 sys 0m5.725s 00:15:56.946 01:16:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:56.946 01:16:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:56.946 ************************************ 00:15:56.946 END TEST nvmf_bdev_io_wait 00:15:56.946 ************************************ 00:15:56.946 01:16:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:56.946 01:16:18 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:56.946 01:16:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:56.946 01:16:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:56.946 01:16:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:56.946 ************************************ 00:15:56.946 START TEST nvmf_queue_depth 00:15:56.946 ************************************ 00:15:56.946 01:16:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:56.946 * Looking for test storage... 
00:15:56.946 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:56.946 01:16:19 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:56.946 01:16:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:15:56.946 01:16:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:56.946 01:16:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:56.946 01:16:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:56.946 01:16:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:56.946 01:16:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:56.946 01:16:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:56.946 01:16:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:56.946 01:16:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:56.946 01:16:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:56.946 01:16:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:56.946 01:16:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:56.946 01:16:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:56.946 01:16:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:56.946 01:16:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:56.946 01:16:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:56.946 01:16:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:56.946 01:16:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:56.946 01:16:19 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:56.946 01:16:19 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:56.946 01:16:19 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:56.946 01:16:19 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.947 01:16:19 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.947 01:16:19 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.947 01:16:19 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:15:56.947 01:16:19 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.947 01:16:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:15:56.947 01:16:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:56.947 01:16:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:56.947 01:16:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:56.947 01:16:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:56.947 01:16:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:56.947 01:16:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:56.947 01:16:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 
']' 00:15:56.947 01:16:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:56.947 01:16:19 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:15:56.947 01:16:19 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:15:56.947 01:16:19 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:56.947 01:16:19 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:15:56.947 01:16:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:56.947 01:16:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:56.947 01:16:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:56.947 01:16:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:56.947 01:16:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:56.947 01:16:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:56.947 01:16:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:56.947 01:16:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:56.947 01:16:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:56.947 01:16:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:56.947 01:16:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:15:56.947 01:16:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:02.230 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:02.230 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:16:02.230 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local 
-a pci_devs 00:16:02.230 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:02.230 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:02.230 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:02.230 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:02.230 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:16:02.230 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:02.230 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:16:02.230 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:16:02.230 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:16:02.230 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:16:02.230 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:16:02.230 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:16:02.230 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:02.230 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:02.230 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:02.230 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:02.230 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:02.230 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:02.230 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:02.230 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:02.230 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:02.230 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:02.230 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:02.230 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:02.230 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:02.230 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:02.230 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:02.230 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:02.230 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:02.230 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:02.230 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:02.230 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:02.230 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:02.230 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:02.230 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:02.230 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:02.230 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:02.230 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:02.230 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:02.230 Found 0000:86:00.1 (0x8086 - 
0x159b) 00:16:02.230 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:02.230 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:02.230 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:02.230 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:02.230 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:02.230 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:02.230 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:02.230 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:02.230 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:02.230 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:02.230 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:02.230 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:02.230 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:02.230 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:02.230 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:02.230 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:02.230 Found net devices under 0000:86:00.0: cvl_0_0 00:16:02.230 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:02.230 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:02.230 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:02.230 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:02.230 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:02.231 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:02.231 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:02.231 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:02.231 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:02.231 Found net devices under 0000:86:00.1: cvl_0_1 00:16:02.231 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:02.231 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:02.231 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:16:02.231 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:02.231 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:02.231 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:02.231 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:02.231 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:02.231 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:02.231 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:02.231 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:02.231 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:02.231 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # 
NVMF_SECOND_TARGET_IP= 00:16:02.231 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:02.231 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:02.231 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:02.231 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:02.231 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:02.231 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:02.231 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:02.231 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:02.231 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:02.231 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:02.231 01:16:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:02.231 01:16:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:02.231 01:16:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:02.231 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:02.231 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.163 ms
00:16:02.231
00:16:02.231 --- 10.0.0.2 ping statistics ---
00:16:02.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:02.231 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms
00:16:02.231 01:16:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:16:02.231 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:16:02.231 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms
00:16:02.231
00:16:02.231 --- 10.0.0.1 ping statistics ---
00:16:02.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:02.231 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms
00:16:02.231 01:16:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:16:02.231 01:16:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0
00:16:02.231 01:16:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:16:02.231 01:16:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:16:02.231 01:16:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:16:02.231 01:16:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:16:02.231 01:16:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:16:02.231 01:16:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:16:02.231 01:16:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:16:02.231 01:16:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2
00:16:02.231 01:16:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:16:02.231 01:16:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable
00:16:02.231 01:16:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- #
set +x 00:16:02.231 01:16:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=863455 00:16:02.231 01:16:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 863455 00:16:02.231 01:16:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:02.231 01:16:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 863455 ']' 00:16:02.231 01:16:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:02.231 01:16:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:02.231 01:16:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:02.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:02.231 01:16:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:02.231 01:16:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:02.231 [2024-07-25 01:16:24.117842] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:16:02.231 [2024-07-25 01:16:24.117883] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:02.231 EAL: No free 2048 kB hugepages reported on node 1 00:16:02.231 [2024-07-25 01:16:24.174871] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:02.231 [2024-07-25 01:16:24.251117] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:16:02.231 [2024-07-25 01:16:24.251155] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:02.231 [2024-07-25 01:16:24.251162] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:02.231 [2024-07-25 01:16:24.251168] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:02.231 [2024-07-25 01:16:24.251173] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:02.231 [2024-07-25 01:16:24.251189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:02.492 01:16:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:02.492 01:16:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:16:02.492 01:16:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:02.492 01:16:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:02.492 01:16:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:02.492 01:16:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:02.492 01:16:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:02.492 01:16:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.492 01:16:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:02.492 [2024-07-25 01:16:24.956371] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:02.492 01:16:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.492 01:16:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:02.492 01:16:24 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.492 01:16:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:02.753 Malloc0 00:16:02.753 01:16:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.753 01:16:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:02.753 01:16:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.753 01:16:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:02.753 01:16:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.753 01:16:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:02.753 01:16:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.753 01:16:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:02.753 01:16:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.753 01:16:25 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:02.753 01:16:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.753 01:16:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:02.753 [2024-07-25 01:16:25.009023] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:02.753 01:16:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.753 01:16:25 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=863698 00:16:02.753 01:16:25 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id 
$NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:02.753 01:16:25 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 863698 /var/tmp/bdevperf.sock 00:16:02.753 01:16:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 863698 ']' 00:16:02.753 01:16:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:02.753 01:16:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:02.753 01:16:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:02.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:02.753 01:16:25 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:16:02.753 01:16:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:02.753 01:16:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:02.753 [2024-07-25 01:16:25.054910] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:16:02.753 [2024-07-25 01:16:25.054951] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid863698 ] 00:16:02.753 EAL: No free 2048 kB hugepages reported on node 1 00:16:02.754 [2024-07-25 01:16:25.108036] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:02.754 [2024-07-25 01:16:25.180612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:03.694 01:16:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:03.694 01:16:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:16:03.694 01:16:25 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:03.694 01:16:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.694 01:16:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:03.694 NVMe0n1 00:16:03.694 01:16:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.694 01:16:26 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:03.694 Running I/O for 10 seconds... 
00:16:15.917 00:16:15.917 Latency(us) 00:16:15.917 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:15.917 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:16:15.917 Verification LBA range: start 0x0 length 0x4000 00:16:15.917 NVMe0n1 : 10.09 12091.74 47.23 0.00 0.00 84047.07 13563.10 63826.37 00:16:15.917 =================================================================================================================== 00:16:15.917 Total : 12091.74 47.23 0.00 0.00 84047.07 13563.10 63826.37 00:16:15.917 0 00:16:15.917 01:16:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 863698 00:16:15.917 01:16:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 863698 ']' 00:16:15.917 01:16:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 863698 00:16:15.917 01:16:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:16:15.917 01:16:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:15.917 01:16:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 863698 00:16:15.917 01:16:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:15.917 01:16:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:15.917 01:16:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 863698' 00:16:15.917 killing process with pid 863698 00:16:15.917 01:16:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 863698 00:16:15.917 Received shutdown signal, test time was about 10.000000 seconds 00:16:15.917 00:16:15.917 Latency(us) 00:16:15.917 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:15.917 
=================================================================================================================== 00:16:15.917 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:15.917 01:16:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 863698 00:16:15.917 01:16:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:16:15.917 01:16:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:16:15.917 01:16:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:15.917 01:16:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:16:15.917 01:16:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:15.917 01:16:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:16:15.917 01:16:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:15.917 01:16:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:15.917 rmmod nvme_tcp 00:16:15.917 rmmod nvme_fabrics 00:16:15.917 rmmod nvme_keyring 00:16:15.917 01:16:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:15.917 01:16:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:16:15.917 01:16:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:16:15.917 01:16:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 863455 ']' 00:16:15.917 01:16:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 863455 00:16:15.917 01:16:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 863455 ']' 00:16:15.917 01:16:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 863455 00:16:15.917 01:16:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:16:15.917 01:16:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:15.917 01:16:36 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 863455 00:16:15.917 01:16:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:15.917 01:16:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:15.917 01:16:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 863455' 00:16:15.917 killing process with pid 863455 00:16:15.917 01:16:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 863455 00:16:15.917 01:16:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 863455 00:16:15.917 01:16:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:15.917 01:16:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:15.917 01:16:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:15.917 01:16:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:15.917 01:16:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:15.917 01:16:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:15.917 01:16:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:15.917 01:16:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:16.488 01:16:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:16.488 00:16:16.488 real 0m19.942s 00:16:16.488 user 0m24.867s 00:16:16.488 sys 0m5.314s 00:16:16.488 01:16:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:16.488 01:16:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:16.488 ************************************ 00:16:16.488 END TEST nvmf_queue_depth 00:16:16.488 
************************************ 00:16:16.488 01:16:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:16.488 01:16:38 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:16:16.488 01:16:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:16.488 01:16:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:16.488 01:16:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:16.488 ************************************ 00:16:16.488 START TEST nvmf_target_multipath 00:16:16.488 ************************************ 00:16:16.488 01:16:38 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:16:16.748 * Looking for test storage... 00:16:16.748 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:16.748 01:16:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:16.748 01:16:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:16:16.748 01:16:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:16.748 01:16:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:16.748 01:16:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:16.748 01:16:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:16.748 01:16:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:16.748 01:16:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:16.748 01:16:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:16.749 
01:16:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:16.749 01:16:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:16.749 01:16:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:16.749 01:16:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:16.749 01:16:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:16.749 01:16:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:16.749 01:16:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:16.749 01:16:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:16.749 01:16:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:16.749 01:16:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:16.749 01:16:39 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:16.749 01:16:39 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:16.749 01:16:39 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:16.749 01:16:39 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.749 01:16:39 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.749 01:16:39 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.749 01:16:39 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:16:16.749 01:16:39 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.749 01:16:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:16:16.749 01:16:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:16.749 01:16:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:16.749 01:16:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:16.749 01:16:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:16.749 01:16:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:16.749 01:16:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:16.749 01:16:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:16.749 01:16:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:16.749 01:16:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:16.749 01:16:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:16.749 01:16:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:16:16.749 01:16:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:16.749 01:16:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # 
nvmftestinit 00:16:16.749 01:16:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:16.749 01:16:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:16.749 01:16:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:16.749 01:16:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:16.749 01:16:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:16.749 01:16:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:16.749 01:16:39 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:16.749 01:16:39 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:16.749 01:16:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:16.749 01:16:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:16.749 01:16:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:16:16.749 01:16:39 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:16:22.032 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:22.032 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:16:22.032 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:22.032 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:22.032 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:22.032 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:22.032 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:22.032 
01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:16:22.032 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:22.032 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:16:22.032 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:16:22.032 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:16:22.032 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:16:22.032 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:16:22.032 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:16:22.032 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:22.032 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:22.033 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:22.033 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:22.033 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:22.033 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:22.033 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:22.033 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:22.033 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:22.033 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:22.033 01:16:44 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:22.033 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:22.033 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:22.033 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:22.033 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:22.033 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:22.033 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:22.033 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:22.033 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:22.033 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:22.033 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:22.033 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:22.033 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:22.033 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:22.033 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:22.033 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:22.033 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:22.033 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:22.033 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:22.033 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:22.033 
01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:22.033 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:22.033 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:22.033 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:22.033 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:22.033 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:22.033 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:22.033 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:22.033 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:22.033 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:22.033 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:22.033 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:22.033 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:22.033 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:22.033 Found net devices under 0000:86:00.0: cvl_0_0 00:16:22.033 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:22.033 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:22.033 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:22.033 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
00:16:22.033 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:22.033 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:22.033 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:22.033 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:22.033 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:22.033 Found net devices under 0000:86:00.1: cvl_0_1 00:16:22.033 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:22.033 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:22.033 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:16:22.033 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:22.033 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:22.033 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:22.033 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:22.033 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:22.033 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:22.033 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:22.033 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:22.033 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:22.033 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:22.033 01:16:44 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:22.033 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:22.033 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:22.033 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:22.033 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:22.033 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:22.033 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:22.033 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:22.033 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:22.033 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:22.033 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:22.033 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:22.034 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:22.034 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:22.034 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:16:22.034 00:16:22.034 --- 10.0.0.2 ping statistics --- 00:16:22.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:22.034 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:16:22.034 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:22.034 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:22.034 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.357 ms 00:16:22.034 00:16:22.034 --- 10.0.0.1 ping statistics --- 00:16:22.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:22.034 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms 00:16:22.034 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:22.034 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:16:22.034 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:22.034 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:22.034 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:22.034 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:22.034 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:22.034 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:22.034 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:22.034 01:16:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:16:22.034 01:16:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:16:22.034 only one NIC for nvmf test 00:16:22.034 01:16:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # 
nvmftestfini 00:16:22.034 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:22.034 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:16:22.034 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:22.034 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:16:22.034 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:22.034 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:22.034 rmmod nvme_tcp 00:16:22.034 rmmod nvme_fabrics 00:16:22.034 rmmod nvme_keyring 00:16:22.034 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:22.034 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:16:22.295 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:16:22.295 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:16:22.295 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:22.295 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:22.295 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:22.295 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:22.295 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:22.295 01:16:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:22.295 01:16:44 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:22.295 01:16:44 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:24.273 01:16:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 
addr flush cvl_0_1 00:16:24.273 01:16:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:16:24.273 01:16:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:16:24.273 01:16:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:24.273 01:16:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:16:24.273 01:16:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:24.273 01:16:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:16:24.273 01:16:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:24.273 01:16:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:24.273 01:16:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:24.273 01:16:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:16:24.273 01:16:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:16:24.273 01:16:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:16:24.273 01:16:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:24.273 01:16:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:24.273 01:16:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:24.273 01:16:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:24.273 01:16:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:24.273 01:16:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:24.273 01:16:46 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:24.273 01:16:46 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:16:24.273 01:16:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:24.273 00:16:24.273 real 0m7.658s 00:16:24.273 user 0m1.566s 00:16:24.273 sys 0m4.088s 00:16:24.273 01:16:46 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:24.273 01:16:46 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:16:24.273 ************************************ 00:16:24.273 END TEST nvmf_target_multipath 00:16:24.273 ************************************ 00:16:24.273 01:16:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:24.273 01:16:46 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:24.273 01:16:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:24.273 01:16:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:24.273 01:16:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:24.273 ************************************ 00:16:24.273 START TEST nvmf_zcopy 00:16:24.273 ************************************ 00:16:24.274 01:16:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:24.534 * Looking for test storage... 
00:16:24.534 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:24.534 01:16:46 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:24.534 01:16:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:16:24.534 01:16:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:24.534 01:16:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:24.534 01:16:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:24.534 01:16:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:24.534 01:16:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:24.534 01:16:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:24.534 01:16:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:24.534 01:16:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:24.534 01:16:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:24.534 01:16:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:24.534 01:16:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:24.534 01:16:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:24.534 01:16:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:24.534 01:16:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:24.534 01:16:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:24.534 01:16:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:24.534 01:16:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:24.534 01:16:46 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:24.534 01:16:46 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:24.534 01:16:46 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:24.534 01:16:46 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.534 01:16:46 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.534 01:16:46 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.534 01:16:46 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:16:24.534 01:16:46 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.534 01:16:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:16:24.534 01:16:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:24.534 01:16:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:24.534 01:16:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:24.534 01:16:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:24.534 01:16:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:24.534 01:16:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:24.534 01:16:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:24.534 01:16:46 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:16:24.534 01:16:46 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:16:24.534 01:16:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:24.534 01:16:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:24.534 01:16:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:24.534 01:16:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:24.534 01:16:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:24.534 01:16:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:24.534 01:16:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:24.534 01:16:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:24.534 01:16:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:24.534 01:16:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:24.534 01:16:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:16:24.534 01:16:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:16:29.821 01:16:52 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:29.821 01:16:52 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:29.821 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:29.821 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:29.821 Found net devices under 0000:86:00.0: cvl_0_0 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:29.821 Found net devices under 0000:86:00.1: cvl_0_1 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:29.821 01:16:52 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:29.821 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:30.082 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:30.082 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:30.082 01:16:52 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:30.082 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:30.082 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:30.082 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:30.082 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:30.082 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:30.082 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:16:30.082 00:16:30.082 --- 10.0.0.2 ping statistics --- 00:16:30.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.082 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:16:30.082 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:30.082 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:30.082 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:16:30.082 00:16:30.082 --- 10.0.0.1 ping statistics --- 00:16:30.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.082 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:16:30.082 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:30.082 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:16:30.082 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:30.082 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:30.082 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:30.082 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:30.082 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:30.082 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:30.082 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:30.082 01:16:52 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:16:30.082 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:30.082 01:16:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:30.082 01:16:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:30.082 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=872441 00:16:30.082 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 872441 00:16:30.082 01:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:30.082 01:16:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 872441 ']' 00:16:30.082 01:16:52 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:30.082 01:16:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:30.082 01:16:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:30.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:30.082 01:16:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:30.082 01:16:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:30.343 [2024-07-25 01:16:52.595767] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:16:30.343 [2024-07-25 01:16:52.595816] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:30.343 EAL: No free 2048 kB hugepages reported on node 1 00:16:30.343 [2024-07-25 01:16:52.651823] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:30.343 [2024-07-25 01:16:52.728140] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:30.343 [2024-07-25 01:16:52.728179] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:30.343 [2024-07-25 01:16:52.728186] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:30.343 [2024-07-25 01:16:52.728192] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:30.343 [2024-07-25 01:16:52.728197] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:30.343 [2024-07-25 01:16:52.728218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:30.913 01:16:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:30.913 01:16:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:16:30.913 01:16:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:30.913 01:16:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:30.913 01:16:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:31.173 01:16:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:31.173 01:16:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:16:31.173 01:16:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:16:31.173 01:16:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.173 01:16:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:31.173 [2024-07-25 01:16:53.443308] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:31.173 01:16:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.173 01:16:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:31.173 01:16:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.173 01:16:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:31.173 01:16:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.173 01:16:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:31.173 01:16:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 
00:16:31.173 01:16:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:31.173 [2024-07-25 01:16:53.459471] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:31.173 01:16:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.173 01:16:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:31.173 01:16:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.173 01:16:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:31.173 01:16:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.173 01:16:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:16:31.173 01:16:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.173 01:16:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:31.173 malloc0 00:16:31.173 01:16:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.173 01:16:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:31.173 01:16:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.173 01:16:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:31.173 01:16:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.173 01:16:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:16:31.173 01:16:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:16:31.173 01:16:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:16:31.173 01:16:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem 
config 00:16:31.173 01:16:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:31.173 01:16:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:31.173 { 00:16:31.173 "params": { 00:16:31.173 "name": "Nvme$subsystem", 00:16:31.173 "trtype": "$TEST_TRANSPORT", 00:16:31.173 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:31.173 "adrfam": "ipv4", 00:16:31.173 "trsvcid": "$NVMF_PORT", 00:16:31.173 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:31.173 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:31.173 "hdgst": ${hdgst:-false}, 00:16:31.173 "ddgst": ${ddgst:-false} 00:16:31.173 }, 00:16:31.174 "method": "bdev_nvme_attach_controller" 00:16:31.174 } 00:16:31.174 EOF 00:16:31.174 )") 00:16:31.174 01:16:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:16:31.174 01:16:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:16:31.174 01:16:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:16:31.174 01:16:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:31.174 "params": { 00:16:31.174 "name": "Nvme1", 00:16:31.174 "trtype": "tcp", 00:16:31.174 "traddr": "10.0.0.2", 00:16:31.174 "adrfam": "ipv4", 00:16:31.174 "trsvcid": "4420", 00:16:31.174 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:31.174 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:31.174 "hdgst": false, 00:16:31.174 "ddgst": false 00:16:31.174 }, 00:16:31.174 "method": "bdev_nvme_attach_controller" 00:16:31.174 }' 00:16:31.174 [2024-07-25 01:16:53.534407] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:16:31.174 [2024-07-25 01:16:53.534449] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid872595 ] 00:16:31.174 EAL: No free 2048 kB hugepages reported on node 1 00:16:31.174 [2024-07-25 01:16:53.588446] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:31.174 [2024-07-25 01:16:53.663155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:31.743 Running I/O for 10 seconds... 00:16:41.736 00:16:41.736 Latency(us) 00:16:41.736 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:41.736 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:16:41.736 Verification LBA range: start 0x0 length 0x1000 00:16:41.736 Nvme1n1 : 10.01 7509.64 58.67 0.00 0.00 17001.76 623.30 47413.87 00:16:41.736 =================================================================================================================== 00:16:41.736 Total : 7509.64 58.67 0.00 0.00 17001.76 623.30 47413.87 00:16:41.736 01:17:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=874415 00:16:41.736 01:17:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:16:41.736 01:17:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:41.736 01:17:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:16:41.736 01:17:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:16:41.736 01:17:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:16:41.736 01:17:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:16:41.736 01:17:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:41.736 01:17:04 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:41.736 { 00:16:41.736 "params": { 00:16:41.736 "name": "Nvme$subsystem", 00:16:41.736 "trtype": "$TEST_TRANSPORT", 00:16:41.736 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:41.736 "adrfam": "ipv4", 00:16:41.736 "trsvcid": "$NVMF_PORT", 00:16:41.736 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:41.736 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:41.736 "hdgst": ${hdgst:-false}, 00:16:41.736 "ddgst": ${ddgst:-false} 00:16:41.736 }, 00:16:41.736 "method": "bdev_nvme_attach_controller" 00:16:41.736 } 00:16:41.736 EOF 00:16:41.736 )") 00:16:41.736 [2024-07-25 01:17:04.212110] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.736 [2024-07-25 01:17:04.212147] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.736 01:17:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:16:41.736 01:17:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:16:41.736 [2024-07-25 01:17:04.220098] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.736 [2024-07-25 01:17:04.220111] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.736 01:17:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:16:41.736 01:17:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:41.736 "params": { 00:16:41.736 "name": "Nvme1", 00:16:41.736 "trtype": "tcp", 00:16:41.736 "traddr": "10.0.0.2", 00:16:41.736 "adrfam": "ipv4", 00:16:41.736 "trsvcid": "4420", 00:16:41.736 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:41.736 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:41.736 "hdgst": false, 00:16:41.736 "ddgst": false 00:16:41.736 }, 00:16:41.736 "method": "bdev_nvme_attach_controller" 00:16:41.736 }' 00:16:41.736 [2024-07-25 01:17:04.228112] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.736 [2024-07-25 01:17:04.228124] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.997 [2024-07-25 01:17:04.236134] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.997 [2024-07-25 01:17:04.236144] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.997 [2024-07-25 01:17:04.244156] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.997 [2024-07-25 01:17:04.244165] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.997 [2024-07-25 01:17:04.252176] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.997 [2024-07-25 01:17:04.252186] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.997 [2024-07-25 01:17:04.253726] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:16:41.997 [2024-07-25 01:17:04.253768] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid874415 ]
00:16:41.997 [2024-07-25 01:17:04.260198] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:41.997 [2024-07-25 01:17:04.260213] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:41.997 EAL: No free 2048 kB hugepages reported on node 1
00:16:41.997 [2024-07-25 01:17:04.307960] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:42.257 [2024-07-25 01:17:04.383220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:16:42.257 Running I/O for 5 seconds...
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.303 [2024-07-25 01:17:05.686400] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.303 [2024-07-25 01:17:05.694744] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.303 [2024-07-25 01:17:05.694763] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.303 [2024-07-25 01:17:05.703216] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.303 [2024-07-25 01:17:05.703235] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.303 [2024-07-25 01:17:05.711673] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.303 [2024-07-25 01:17:05.711692] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.303 [2024-07-25 01:17:05.718357] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.303 [2024-07-25 01:17:05.718375] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.303 [2024-07-25 01:17:05.729352] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.303 [2024-07-25 01:17:05.729370] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.303 [2024-07-25 01:17:05.738509] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.303 [2024-07-25 01:17:05.738527] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.303 [2024-07-25 01:17:05.747535] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.303 [2024-07-25 01:17:05.747553] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.303 [2024-07-25 01:17:05.756095] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:16:43.303 [2024-07-25 01:17:05.756113] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.303 [2024-07-25 01:17:05.764857] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.303 [2024-07-25 01:17:05.764875] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.303 [2024-07-25 01:17:05.773354] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.303 [2024-07-25 01:17:05.773373] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.303 [2024-07-25 01:17:05.782346] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.303 [2024-07-25 01:17:05.782364] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.303 [2024-07-25 01:17:05.789487] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.303 [2024-07-25 01:17:05.789506] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.563 [2024-07-25 01:17:05.801257] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.563 [2024-07-25 01:17:05.801275] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.563 [2024-07-25 01:17:05.812013] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.563 [2024-07-25 01:17:05.812031] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.563 [2024-07-25 01:17:05.819896] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.563 [2024-07-25 01:17:05.819914] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.563 [2024-07-25 01:17:05.829609] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.563 
[2024-07-25 01:17:05.829627] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.563 [2024-07-25 01:17:05.838682] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.563 [2024-07-25 01:17:05.838700] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.563 [2024-07-25 01:17:05.845608] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.563 [2024-07-25 01:17:05.845626] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.563 [2024-07-25 01:17:05.857367] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.563 [2024-07-25 01:17:05.857393] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.563 [2024-07-25 01:17:05.866222] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.563 [2024-07-25 01:17:05.866243] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.563 [2024-07-25 01:17:05.874208] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.563 [2024-07-25 01:17:05.874226] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.563 [2024-07-25 01:17:05.883525] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.563 [2024-07-25 01:17:05.883543] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.563 [2024-07-25 01:17:05.895768] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.563 [2024-07-25 01:17:05.895786] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.564 [2024-07-25 01:17:05.906893] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.564 [2024-07-25 01:17:05.906911] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.564 [2024-07-25 01:17:05.917203] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.564 [2024-07-25 01:17:05.917222] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.564 [2024-07-25 01:17:05.927030] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.564 [2024-07-25 01:17:05.927053] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.564 [2024-07-25 01:17:05.933994] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.564 [2024-07-25 01:17:05.934012] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.564 [2024-07-25 01:17:05.944182] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.564 [2024-07-25 01:17:05.944201] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.564 [2024-07-25 01:17:05.953560] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.564 [2024-07-25 01:17:05.953578] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.564 [2024-07-25 01:17:05.962570] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.564 [2024-07-25 01:17:05.962589] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.564 [2024-07-25 01:17:05.976136] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.564 [2024-07-25 01:17:05.976153] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.564 [2024-07-25 01:17:05.985504] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.564 [2024-07-25 01:17:05.985522] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:16:43.564 [2024-07-25 01:17:05.995638] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.564 [2024-07-25 01:17:05.995656] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.564 [2024-07-25 01:17:06.005464] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.564 [2024-07-25 01:17:06.005482] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.564 [2024-07-25 01:17:06.013303] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.564 [2024-07-25 01:17:06.013321] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.564 [2024-07-25 01:17:06.026332] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.564 [2024-07-25 01:17:06.026349] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.564 [2024-07-25 01:17:06.035684] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.564 [2024-07-25 01:17:06.035701] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.564 [2024-07-25 01:17:06.043291] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.564 [2024-07-25 01:17:06.043309] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.564 [2024-07-25 01:17:06.054586] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.564 [2024-07-25 01:17:06.054604] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.824 [2024-07-25 01:17:06.065498] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.824 [2024-07-25 01:17:06.065516] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.824 [2024-07-25 01:17:06.075757] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.824 [2024-07-25 01:17:06.075776] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.824 [2024-07-25 01:17:06.082931] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.824 [2024-07-25 01:17:06.082950] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.824 [2024-07-25 01:17:06.092692] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.824 [2024-07-25 01:17:06.092711] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.824 [2024-07-25 01:17:06.103847] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.824 [2024-07-25 01:17:06.103865] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.824 [2024-07-25 01:17:06.113348] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.824 [2024-07-25 01:17:06.113367] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.824 [2024-07-25 01:17:06.121803] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.824 [2024-07-25 01:17:06.121822] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.824 [2024-07-25 01:17:06.130318] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.824 [2024-07-25 01:17:06.130336] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.824 [2024-07-25 01:17:06.139527] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.824 [2024-07-25 01:17:06.139545] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.824 [2024-07-25 01:17:06.148810] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:16:43.824 [2024-07-25 01:17:06.148827] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.824 [2024-07-25 01:17:06.158143] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.824 [2024-07-25 01:17:06.158162] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.824 [2024-07-25 01:17:06.167224] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.824 [2024-07-25 01:17:06.167242] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.824 [2024-07-25 01:17:06.174985] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.824 [2024-07-25 01:17:06.175003] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.824 [2024-07-25 01:17:06.186655] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.824 [2024-07-25 01:17:06.186672] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.824 [2024-07-25 01:17:06.197641] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.824 [2024-07-25 01:17:06.197659] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.824 [2024-07-25 01:17:06.211439] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.824 [2024-07-25 01:17:06.211457] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.824 [2024-07-25 01:17:06.221985] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.824 [2024-07-25 01:17:06.222003] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.824 [2024-07-25 01:17:06.230842] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.825 
[2024-07-25 01:17:06.230860] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.825 [2024-07-25 01:17:06.239179] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.825 [2024-07-25 01:17:06.239196] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.825 [2024-07-25 01:17:06.249436] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.825 [2024-07-25 01:17:06.249455] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.825 [2024-07-25 01:17:06.256285] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.825 [2024-07-25 01:17:06.256304] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.825 [2024-07-25 01:17:06.266059] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.825 [2024-07-25 01:17:06.266077] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.825 [2024-07-25 01:17:06.277133] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.825 [2024-07-25 01:17:06.277161] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.825 [2024-07-25 01:17:06.289808] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.825 [2024-07-25 01:17:06.289826] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.825 [2024-07-25 01:17:06.298947] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.825 [2024-07-25 01:17:06.298965] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.825 [2024-07-25 01:17:06.307639] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.825 [2024-07-25 01:17:06.307658] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.825 [2024-07-25 01:17:06.315218] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.825 [2024-07-25 01:17:06.315236] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.085 [2024-07-25 01:17:06.325124] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.085 [2024-07-25 01:17:06.325144] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.085 [2024-07-25 01:17:06.334129] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.085 [2024-07-25 01:17:06.334148] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.085 [2024-07-25 01:17:06.342610] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.085 [2024-07-25 01:17:06.342629] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.085 [2024-07-25 01:17:06.349376] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.085 [2024-07-25 01:17:06.349395] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.085 [2024-07-25 01:17:06.360378] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.085 [2024-07-25 01:17:06.360398] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.085 [2024-07-25 01:17:06.367315] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.085 [2024-07-25 01:17:06.367332] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.085 [2024-07-25 01:17:06.378913] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.085 [2024-07-25 01:17:06.378934] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:16:44.085 [2024-07-25 01:17:06.387206] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.085 [2024-07-25 01:17:06.387225] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.085 [2024-07-25 01:17:06.394014] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.085 [2024-07-25 01:17:06.394033] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.085 [2024-07-25 01:17:06.404735] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.085 [2024-07-25 01:17:06.404754] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.085 [2024-07-25 01:17:06.415382] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.085 [2024-07-25 01:17:06.415401] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.085 [2024-07-25 01:17:06.426360] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.085 [2024-07-25 01:17:06.426380] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.085 [2024-07-25 01:17:06.435688] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.085 [2024-07-25 01:17:06.435707] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.085 [2024-07-25 01:17:06.445258] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.085 [2024-07-25 01:17:06.445278] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.085 [2024-07-25 01:17:06.453916] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.085 [2024-07-25 01:17:06.453935] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.085 [2024-07-25 01:17:06.460836] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.085 [2024-07-25 01:17:06.460854] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.085 [2024-07-25 01:17:06.471247] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.085 [2024-07-25 01:17:06.471270] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.086 [2024-07-25 01:17:06.479847] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.086 [2024-07-25 01:17:06.479866] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.086 [2024-07-25 01:17:06.489625] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.086 [2024-07-25 01:17:06.489643] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.086 [2024-07-25 01:17:06.499602] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.086 [2024-07-25 01:17:06.499621] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.086 [2024-07-25 01:17:06.510524] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.086 [2024-07-25 01:17:06.510543] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.086 [2024-07-25 01:17:06.518817] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.086 [2024-07-25 01:17:06.518836] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.086 [2024-07-25 01:17:06.530157] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.086 [2024-07-25 01:17:06.530176] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.086 [2024-07-25 01:17:06.538364] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:16:44.086 [2024-07-25 01:17:06.538383] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.086 [2024-07-25 01:17:06.547849] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.086 [2024-07-25 01:17:06.547868] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.086 [2024-07-25 01:17:06.558340] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.086 [2024-07-25 01:17:06.558359] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.086 [2024-07-25 01:17:06.567460] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.086 [2024-07-25 01:17:06.567478] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.086 [2024-07-25 01:17:06.576644] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.086 [2024-07-25 01:17:06.576662] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.346 [2024-07-25 01:17:06.583418] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.346 [2024-07-25 01:17:06.583436] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.346 [2024-07-25 01:17:06.595029] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.346 [2024-07-25 01:17:06.595056] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.346 [2024-07-25 01:17:06.603440] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.346 [2024-07-25 01:17:06.603458] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.346 [2024-07-25 01:17:06.612539] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.346 
[2024-07-25 01:17:06.612570] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.346 [2024-07-25 01:17:06.621626] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.346 [2024-07-25 01:17:06.621645] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.346 [2024-07-25 01:17:06.628350] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.346 [2024-07-25 01:17:06.628368] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.346 [2024-07-25 01:17:06.638714] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.346 [2024-07-25 01:17:06.638733] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.346 [2024-07-25 01:17:06.647291] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.346 [2024-07-25 01:17:06.647314] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.346 [2024-07-25 01:17:06.655678] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.346 [2024-07-25 01:17:06.655697] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.346 [2024-07-25 01:17:06.662809] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.346 [2024-07-25 01:17:06.662828] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.346 [2024-07-25 01:17:06.673421] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.346 [2024-07-25 01:17:06.673440] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.346 [2024-07-25 01:17:06.682414] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.346 [2024-07-25 01:17:06.682433] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.346 [2024-07-25 01:17:06.690275] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.346 [2024-07-25 01:17:06.690293] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.346 [2024-07-25 01:17:06.699667] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.346 [2024-07-25 01:17:06.699685] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.347 [2024-07-25 01:17:06.706722] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.347 [2024-07-25 01:17:06.706741] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.347 [2024-07-25 01:17:06.717051] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.347 [2024-07-25 01:17:06.717070] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.347 [2024-07-25 01:17:06.726648] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.347 [2024-07-25 01:17:06.726666] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.347 [2024-07-25 01:17:06.738802] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.347 [2024-07-25 01:17:06.738820] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.347 [2024-07-25 01:17:06.748018] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.347 [2024-07-25 01:17:06.748035] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.347 [2024-07-25 01:17:06.755867] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.347 [2024-07-25 01:17:06.755885] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:16:44.347 [2024-07-25 01:17:06.765923] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.347 [2024-07-25 01:17:06.765942] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.347 [2024-07-25 01:17:06.772597] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.347 [2024-07-25 01:17:06.772615] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.347 [2024-07-25 01:17:06.783414] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.347 [2024-07-25 01:17:06.783433] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.347 [2024-07-25 01:17:06.790536] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.347 [2024-07-25 01:17:06.790555] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.347 [2024-07-25 01:17:06.800653] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.347 [2024-07-25 01:17:06.800673] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.347 [2024-07-25 01:17:06.809921] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.347 [2024-07-25 01:17:06.809940] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.347 [2024-07-25 01:17:06.817281] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.347 [2024-07-25 01:17:06.817306] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.347 [2024-07-25 01:17:06.827837] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.347 [2024-07-25 01:17:06.827855] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.347 [2024-07-25 01:17:06.836431] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.347 [2024-07-25 01:17:06.836449] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.607 [2024-07-25 01:17:06.845123] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.607 [2024-07-25 01:17:06.845141] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.607 [2024-07-25 01:17:06.853438] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.607 [2024-07-25 01:17:06.853456] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.607 [2024-07-25 01:17:06.862679] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.607 [2024-07-25 01:17:06.862697] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.607 [2024-07-25 01:17:06.871732] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.607 [2024-07-25 01:17:06.871750] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.607 [2024-07-25 01:17:06.920254] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.607 [2024-07-25 01:17:06.920272] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.607 [2024-07-25 01:17:06.928400] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.607 [2024-07-25 01:17:06.928418] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.607 [2024-07-25 01:17:06.936348] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.607 [2024-07-25 01:17:06.936366] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.607 [2024-07-25 01:17:06.945759] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:16:44.607 [2024-07-25 01:17:06.945776] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.607 [2024-07-25 01:17:06.954339] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.607 [2024-07-25 01:17:06.954356] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.608 [2024-07-25 01:17:06.961080] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.608 [2024-07-25 01:17:06.961098] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.608 [2024-07-25 01:17:06.972570] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.608 [2024-07-25 01:17:06.972589] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.608 [2024-07-25 01:17:06.981048] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.608 [2024-07-25 01:17:06.981066] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.608 [2024-07-25 01:17:06.990099] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.608 [2024-07-25 01:17:06.990117] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.608 [2024-07-25 01:17:06.999149] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.608 [2024-07-25 01:17:06.999166] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.608 [2024-07-25 01:17:07.008253] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.608 [2024-07-25 01:17:07.008272] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.608 [2024-07-25 01:17:07.014940] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.608 
[2024-07-25 01:17:07.014958] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.608 [2024-07-25 01:17:07.025073] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.608 [2024-07-25 01:17:07.025094] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.608 [2024-07-25 01:17:07.033482] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.608 [2024-07-25 01:17:07.033501] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.608 [2024-07-25 01:17:07.041947] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.608 [2024-07-25 01:17:07.041965] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.608 [2024-07-25 01:17:07.050953] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.608 [2024-07-25 01:17:07.050971] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.608 [2024-07-25 01:17:07.059428] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.608 [2024-07-25 01:17:07.059445] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.608 [2024-07-25 01:17:07.068541] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.608 [2024-07-25 01:17:07.068559] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.608 [2024-07-25 01:17:07.077677] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.608 [2024-07-25 01:17:07.077695] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.608 [2024-07-25 01:17:07.085999] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.608 [2024-07-25 01:17:07.086016] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.608 [2024-07-25 01:17:07.095107] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.608 [2024-07-25 01:17:07.095125] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.868 [2024-07-25 01:17:07.104242] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.868 [2024-07-25 01:17:07.104261] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.868 [2024-07-25 01:17:07.113589] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.868 [2024-07-25 01:17:07.113607] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.868 [2024-07-25 01:17:07.122154] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.868 [2024-07-25 01:17:07.122172] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.868 [2024-07-25 01:17:07.130845] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.868 [2024-07-25 01:17:07.130863] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.868 [2024-07-25 01:17:07.139350] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.868 [2024-07-25 01:17:07.139368] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.868 [2024-07-25 01:17:07.147763] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.868 [2024-07-25 01:17:07.147781] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.868 [2024-07-25 01:17:07.156411] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.868 [2024-07-25 01:17:07.156430] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:16:44.868 [2024-07-25 01:17:07.165434] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.868 [2024-07-25 01:17:07.165452] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.869 [2024-07-25 01:17:07.173356] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.869 [2024-07-25 01:17:07.173374] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.869 [2024-07-25 01:17:07.184474] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.869 [2024-07-25 01:17:07.184492] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.869 [2024-07-25 01:17:07.191386] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.869 [2024-07-25 01:17:07.191405] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.869 [2024-07-25 01:17:07.199662] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.869 [2024-07-25 01:17:07.199681] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.869 [2024-07-25 01:17:07.207924] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.869 [2024-07-25 01:17:07.207943] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.869 [2024-07-25 01:17:07.217154] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.869 [2024-07-25 01:17:07.217171] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.869 [2024-07-25 01:17:07.226311] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.869 [2024-07-25 01:17:07.226329] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.869 [2024-07-25 01:17:07.235237] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.869 [2024-07-25 01:17:07.235255] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.869 [2024-07-25 01:17:07.242850] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.869 [2024-07-25 01:17:07.242868] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.869 [2024-07-25 01:17:07.252489] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.869 [2024-07-25 01:17:07.252508] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.869 [2024-07-25 01:17:07.261517] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.869 [2024-07-25 01:17:07.261536] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.869 [2024-07-25 01:17:07.270822] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.869 [2024-07-25 01:17:07.270840] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.869 [2024-07-25 01:17:07.279662] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.869 [2024-07-25 01:17:07.279680] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.869 [2024-07-25 01:17:07.288083] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.869 [2024-07-25 01:17:07.288101] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.869 [2024-07-25 01:17:07.296535] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.869 [2024-07-25 01:17:07.296553] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.869 [2024-07-25 01:17:07.304916] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:16:44.869 [2024-07-25 01:17:07.304934] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.869 [2024-07-25 01:17:07.314014] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.869 [2024-07-25 01:17:07.314031] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.869 [2024-07-25 01:17:07.323139] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.869 [2024-07-25 01:17:07.323157] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.869 [2024-07-25 01:17:07.331746] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.869 [2024-07-25 01:17:07.331763] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.869 [2024-07-25 01:17:07.338492] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.869 [2024-07-25 01:17:07.338510] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.869 [2024-07-25 01:17:07.349318] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.869 [2024-07-25 01:17:07.349337] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.869 [2024-07-25 01:17:07.355960] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.869 [2024-07-25 01:17:07.355978] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.129 [2024-07-25 01:17:07.367047] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.129 [2024-07-25 01:17:07.367065] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.129 [2024-07-25 01:17:07.375836] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.130 
[2024-07-25 01:17:07.375854] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.130 [2024-07-25 01:17:07.384357] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.130 [2024-07-25 01:17:07.384375] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.130 [2024-07-25 01:17:07.393401] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.130 [2024-07-25 01:17:07.393419] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.130 [2024-07-25 01:17:07.402377] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.130 [2024-07-25 01:17:07.402395] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.130 [2024-07-25 01:17:07.411224] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.130 [2024-07-25 01:17:07.411242] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.130 [2024-07-25 01:17:07.420448] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.130 [2024-07-25 01:17:07.420466] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.130 [2024-07-25 01:17:07.427723] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.130 [2024-07-25 01:17:07.427741] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.130 [2024-07-25 01:17:07.437290] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.130 [2024-07-25 01:17:07.437308] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.130 [2024-07-25 01:17:07.445709] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.130 [2024-07-25 01:17:07.445727] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.130 [2024-07-25 01:17:07.454200] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.130 [2024-07-25 01:17:07.454218] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.130 [2024-07-25 01:17:07.461872] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.130 [2024-07-25 01:17:07.461890] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.130 [2024-07-25 01:17:07.472861] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.130 [2024-07-25 01:17:07.472879] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.130 [2024-07-25 01:17:07.481574] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.130 [2024-07-25 01:17:07.481593] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.130 [2024-07-25 01:17:07.492307] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.130 [2024-07-25 01:17:07.492325] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.130 [2024-07-25 01:17:07.504221] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.130 [2024-07-25 01:17:07.504241] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.130 [2024-07-25 01:17:07.513019] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.130 [2024-07-25 01:17:07.513038] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.130 [2024-07-25 01:17:07.526894] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.130 [2024-07-25 01:17:07.526913] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:16:45.130 [2024-07-25 01:17:07.538602] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.130 [2024-07-25 01:17:07.538621] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.130 [2024-07-25 01:17:07.547431] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.130 [2024-07-25 01:17:07.547448] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.130 [2024-07-25 01:17:07.554930] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.130 [2024-07-25 01:17:07.554948] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.130 [2024-07-25 01:17:07.564103] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.130 [2024-07-25 01:17:07.564121] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.130 [2024-07-25 01:17:07.571263] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.130 [2024-07-25 01:17:07.571281] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.130 [2024-07-25 01:17:07.582776] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.130 [2024-07-25 01:17:07.582794] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.130 [2024-07-25 01:17:07.594591] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.130 [2024-07-25 01:17:07.594610] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.130 [2024-07-25 01:17:07.604166] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.130 [2024-07-25 01:17:07.604184] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.130 [2024-07-25 01:17:07.613358] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.130 [2024-07-25 01:17:07.613376] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.130 [2024-07-25 01:17:07.620585] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.130 [2024-07-25 01:17:07.620603] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.390 [2024-07-25 01:17:07.630174] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.390 [2024-07-25 01:17:07.630193] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.390 [2024-07-25 01:17:07.678700] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.391 [2024-07-25 01:17:07.678719] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.391 [2024-07-25 01:17:07.696508] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.391 [2024-07-25 01:17:07.696528] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.391 [2024-07-25 01:17:07.707136] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.391 [2024-07-25 01:17:07.707156] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.391 [2024-07-25 01:17:07.716018] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.391 [2024-07-25 01:17:07.716037] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.391 [2024-07-25 01:17:07.727867] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.391 [2024-07-25 01:17:07.727887] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.391 [2024-07-25 01:17:07.738171] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:16:45.391 [2024-07-25 01:17:07.738190] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.391 [2024-07-25 01:17:07.747170] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.391 [2024-07-25 01:17:07.747189] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.391 [2024-07-25 01:17:07.755166] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.391 [2024-07-25 01:17:07.755186] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.391 [2024-07-25 01:17:07.765235] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.391 [2024-07-25 01:17:07.765254] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.391 [2024-07-25 01:17:07.773771] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.391 [2024-07-25 01:17:07.773790] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.391 [2024-07-25 01:17:07.781494] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.391 [2024-07-25 01:17:07.781512] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.391 [2024-07-25 01:17:07.792820] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.391 [2024-07-25 01:17:07.792839] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.391 [2024-07-25 01:17:07.803485] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.391 [2024-07-25 01:17:07.803504] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.391 [2024-07-25 01:17:07.810233] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.391 
[2024-07-25 01:17:07.810252] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.391 [2024-07-25 01:17:07.821008] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.391 [2024-07-25 01:17:07.821028] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.391 [2024-07-25 01:17:07.829431] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.391 [2024-07-25 01:17:07.829450] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.391 [2024-07-25 01:17:07.838495] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.391 [2024-07-25 01:17:07.838515] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.391 [2024-07-25 01:17:07.846839] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.391 [2024-07-25 01:17:07.846859] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.391 [2024-07-25 01:17:07.856126] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.391 [2024-07-25 01:17:07.856144] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.391 [2024-07-25 01:17:07.863083] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.391 [2024-07-25 01:17:07.863102] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.391 [2024-07-25 01:17:07.873824] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.391 [2024-07-25 01:17:07.873842] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.391 [2024-07-25 01:17:07.880812] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.391 [2024-07-25 01:17:07.880830] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.650 [2024-07-25 01:17:07.891692] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.650 [2024-07-25 01:17:07.891711] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.650 [2024-07-25 01:17:07.898634] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.650 [2024-07-25 01:17:07.898652] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.650 [2024-07-25 01:17:07.908773] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.650 [2024-07-25 01:17:07.908791] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.650 [2024-07-25 01:17:07.917189] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.650 [2024-07-25 01:17:07.917208] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.650 [2024-07-25 01:17:07.929289] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.650 [2024-07-25 01:17:07.929311] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.650 [2024-07-25 01:17:07.939069] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.650 [2024-07-25 01:17:07.939088] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.650 [2024-07-25 01:17:07.946675] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.650 [2024-07-25 01:17:07.946694] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.650 [2024-07-25 01:17:07.957435] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.650 [2024-07-25 01:17:07.957454] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:16:45.650 [2024-07-25 01:17:07.965466] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.650 [2024-07-25 01:17:07.965485] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.650 [2024-07-25 01:17:07.975316] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.650 [2024-07-25 01:17:07.975335] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.650 [2024-07-25 01:17:07.983582] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.650 [2024-07-25 01:17:07.983601] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.650 [2024-07-25 01:17:07.994540] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.650 [2024-07-25 01:17:07.994558] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.650 [2024-07-25 01:17:08.005821] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.650 [2024-07-25 01:17:08.005840] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.650 [2024-07-25 01:17:08.012381] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.650 [2024-07-25 01:17:08.012400] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.650 [2024-07-25 01:17:08.021987] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.650 [2024-07-25 01:17:08.022006] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.650 [2024-07-25 01:17:08.028818] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.650 [2024-07-25 01:17:08.028837] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.650 [2024-07-25 01:17:08.038839] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.650 [2024-07-25 01:17:08.038858] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.650 [2024-07-25 01:17:08.047455] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.650 [2024-07-25 01:17:08.047474] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.650 [2024-07-25 01:17:08.058290] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.650 [2024-07-25 01:17:08.058308] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.650 [2024-07-25 01:17:08.068184] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.650 [2024-07-25 01:17:08.068202] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.650 [2024-07-25 01:17:08.075778] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.650 [2024-07-25 01:17:08.075796] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.650 [2024-07-25 01:17:08.086649] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.650 [2024-07-25 01:17:08.086667] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.650 [2024-07-25 01:17:08.098242] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.650 [2024-07-25 01:17:08.098260] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.650 [2024-07-25 01:17:08.106892] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.650 [2024-07-25 01:17:08.106913] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.650 [2024-07-25 01:17:08.115396] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:16:45.650 [2024-07-25 01:17:08.115415] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.650 [2024-07-25 01:17:08.124692] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.650 [2024-07-25 01:17:08.124710] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.650 [2024-07-25 01:17:08.134359] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.650 [2024-07-25 01:17:08.134378] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.909 [2024-07-25 01:17:08.145570] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.909 [2024-07-25 01:17:08.145587] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.909 [2024-07-25 01:17:08.153720] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.909 [2024-07-25 01:17:08.153738] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.909 [2024-07-25 01:17:08.163450] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.909 [2024-07-25 01:17:08.163467] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.909 [2024-07-25 01:17:08.172394] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.909 [2024-07-25 01:17:08.172412] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.909 [2024-07-25 01:17:08.184434] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.909 [2024-07-25 01:17:08.184451] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.909 [2024-07-25 01:17:08.193696] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.909 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:47.212 [2024-07-25 01:17:09.569402] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:47.212 [2024-07-25 01:17:09.578024] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:47.212 [2024-07-25 01:17:09.578047] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:47.212 [2024-07-25 01:17:09.587086] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:47.212 [2024-07-25 01:17:09.587104] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:47.212 [2024-07-25 01:17:09.596096] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:47.212 [2024-07-25 01:17:09.596114] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:47.212 [2024-07-25 01:17:09.602828] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:47.212 [2024-07-25 01:17:09.602846] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:47.212
00:16:47.212 Latency(us)
00:16:47.212 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:47.212 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:16:47.212 Nvme1n1 : 5.00 14958.99 116.87 0.00 0.00 8549.96 2364.99 57215.78
00:16:47.212 ===================================================================================================================
00:16:47.212 Total : 14958.99 116.87 0.00 0.00 8549.96 2364.99 57215.78
00:16:47.212 [2024-07-25 01:17:09.610582] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:47.212 [2024-07-25 01:17:09.610597] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:47.212 [2024-07-25 01:17:09.618605]
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.212 [2024-07-25 01:17:09.618618] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.212 [2024-07-25 01:17:09.626627] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.212 [2024-07-25 01:17:09.626638] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.212 [2024-07-25 01:17:09.634660] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.212 [2024-07-25 01:17:09.634678] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.212 [2024-07-25 01:17:09.642674] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.212 [2024-07-25 01:17:09.642686] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.212 [2024-07-25 01:17:09.650693] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.212 [2024-07-25 01:17:09.650709] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.212 [2024-07-25 01:17:09.658717] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.212 [2024-07-25 01:17:09.658728] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.212 [2024-07-25 01:17:09.666737] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.212 [2024-07-25 01:17:09.666748] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.212 [2024-07-25 01:17:09.674759] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.212 [2024-07-25 01:17:09.674769] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.212 [2024-07-25 01:17:09.682782] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:16:47.212 [2024-07-25 01:17:09.682792] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.212 [2024-07-25 01:17:09.690803] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.212 [2024-07-25 01:17:09.690813] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.212 [2024-07-25 01:17:09.698829] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.212 [2024-07-25 01:17:09.698843] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.530 [2024-07-25 01:17:09.706847] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.530 [2024-07-25 01:17:09.706857] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.530 [2024-07-25 01:17:09.714865] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.530 [2024-07-25 01:17:09.714875] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.530 [2024-07-25 01:17:09.722886] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.530 [2024-07-25 01:17:09.722896] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.530 [2024-07-25 01:17:09.730912] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.530 [2024-07-25 01:17:09.730923] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.530 [2024-07-25 01:17:09.738932] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.530 [2024-07-25 01:17:09.738943] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.530 [2024-07-25 01:17:09.746951] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.530 
[2024-07-25 01:17:09.746961] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.530 [2024-07-25 01:17:09.754973] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.530 [2024-07-25 01:17:09.754982] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.530 [2024-07-25 01:17:09.762995] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.530 [2024-07-25 01:17:09.763007] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.530 [2024-07-25 01:17:09.771016] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.530 [2024-07-25 01:17:09.771027] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.530 [2024-07-25 01:17:09.779035] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.530 [2024-07-25 01:17:09.779049] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.530 [2024-07-25 01:17:09.787076] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.530 [2024-07-25 01:17:09.787094] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (874415) - No such process 00:16:47.530 01:17:09 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 874415 00:16:47.530 01:17:09 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:47.530 01:17:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.530 01:17:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:47.530 01:17:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.530 01:17:09 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # 
rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:47.530 01:17:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.530 01:17:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:47.530 delay0 00:16:47.530 01:17:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.530 01:17:09 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:16:47.530 01:17:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.530 01:17:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:47.530 01:17:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.530 01:17:09 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:16:47.530 EAL: No free 2048 kB hugepages reported on node 1 00:16:47.530 [2024-07-25 01:17:09.962207] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:16:54.115 Initializing NVMe Controllers 00:16:54.115 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:54.115 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:54.115 Initialization complete. Launching workers. 
00:16:54.115 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 99
00:16:54.115 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 378, failed to submit 41
00:16:54.115 success 217, unsuccess 161, failed 0
00:16:54.115 01:17:16 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:16:54.115 01:17:16 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:16:54.115 01:17:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup
00:16:54.115 01:17:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync
00:16:54.115 01:17:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:16:54.115 01:17:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e
00:16:54.115 01:17:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20}
00:16:54.115 01:17:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:16:54.115 rmmod nvme_tcp
00:16:54.115 rmmod nvme_fabrics
00:16:54.115 rmmod nvme_keyring
00:16:54.115 01:17:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:16:54.115 01:17:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e
00:16:54.115 01:17:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0
00:16:54.115 01:17:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 872441 ']'
00:16:54.115 01:17:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 872441
00:16:54.115 01:17:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 872441 ']'
00:16:54.115 01:17:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 872441
00:16:54.115 01:17:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname
00:16:54.115 01:17:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:16:54.115 01:17:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 872441
00:16:54.115 01:17:16
nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:54.115 01:17:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:54.115 01:17:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 872441' 00:16:54.115 killing process with pid 872441 00:16:54.115 01:17:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 872441 00:16:54.115 01:17:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 872441 00:16:54.116 01:17:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:54.116 01:17:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:54.116 01:17:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:54.116 01:17:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:54.116 01:17:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:54.116 01:17:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:54.116 01:17:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:54.116 01:17:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:56.027 01:17:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:56.027 00:16:56.027 real 0m31.812s 00:16:56.027 user 0m43.301s 00:16:56.027 sys 0m10.576s 00:16:56.027 01:17:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:56.027 01:17:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:56.027 ************************************ 00:16:56.027 END TEST nvmf_zcopy 00:16:56.027 ************************************ 00:16:56.287 01:17:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:56.287 01:17:18 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:56.287 01:17:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:56.287 01:17:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:56.287 01:17:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:56.287 ************************************ 00:16:56.287 START TEST nvmf_nmic 00:16:56.287 ************************************ 00:16:56.287 01:17:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:56.287 * Looking for test storage... 00:16:56.287 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:56.288 01:17:18 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:56.288 01:17:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:16:56.288 01:17:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:56.288 01:17:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:56.288 01:17:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:56.288 01:17:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:56.288 01:17:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:56.288 01:17:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:56.288 01:17:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:56.288 01:17:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:56.288 01:17:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:56.288 01:17:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:56.288 01:17:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:56.288 01:17:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:56.288 01:17:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:56.288 01:17:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:56.288 01:17:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:56.288 01:17:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:56.288 01:17:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:56.288 01:17:18 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:56.288 01:17:18 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:56.288 01:17:18 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:56.288 01:17:18 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.288 01:17:18 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.288 01:17:18 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.288 01:17:18 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:16:56.288 01:17:18 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.288 01:17:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:16:56.288 01:17:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:56.288 
01:17:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:56.288 01:17:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:56.288 01:17:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:56.288 01:17:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:56.288 01:17:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:56.288 01:17:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:56.288 01:17:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:56.288 01:17:18 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:56.288 01:17:18 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:56.288 01:17:18 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:16:56.288 01:17:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:56.288 01:17:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:56.288 01:17:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:56.288 01:17:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:56.288 01:17:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:56.288 01:17:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:56.288 01:17:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:56.288 01:17:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:56.288 01:17:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:56.288 01:17:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:56.288 01:17:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:16:56.288 01:17:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:01.572 01:17:23 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:01.572 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:01.572 Found 0000:86:00.1 (0x8086 - 0x159b) 
00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:01.572 Found net devices under 0000:86:00.0: cvl_0_0 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:01.572 01:17:23 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:01.572 Found net devices under 0000:86:00.1: cvl_0_1 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:17:01.572 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:17:01.572 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms
00:17:01.572
00:17:01.572 --- 10.0.0.2 ping statistics ---
00:17:01.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:01.572 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms
00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:17:01.572 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:17:01.572 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.375 ms
00:17:01.572
00:17:01.572 --- 10.0.0.1 ping statistics ---
00:17:01.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:01.572 rtt min/avg/max/mdev = 0.375/0.375/0.375/0.000 ms
00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0
00:17:01.572 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:17:01.573 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:17:01.573 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:17:01.573 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:17:01.573 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:17:01.573 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:17:01.573 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:17:01.573 01:17:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF
00:17:01.573 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:17:01.573 01:17:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable
00:17:01.573 01:17:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:17:01.573 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=879771
00:17:01.573 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 879771
00:17:01.573 01:17:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 879771 ']'
00:17:01.573 01:17:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:01.573 01:17:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100
00:17:01.573 01:17:23 nvmf_tcp.nvmf_nmic --
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:01.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:01.573 01:17:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:01.573 01:17:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:01.573 01:17:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:01.573 [2024-07-25 01:17:23.691614] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:17:01.573 [2024-07-25 01:17:23.691656] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:01.573 EAL: No free 2048 kB hugepages reported on node 1 00:17:01.573 [2024-07-25 01:17:23.747673] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:01.573 [2024-07-25 01:17:23.829256] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:01.573 [2024-07-25 01:17:23.829291] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:01.573 [2024-07-25 01:17:23.829298] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:01.573 [2024-07-25 01:17:23.829304] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:01.573 [2024-07-25 01:17:23.829309] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
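The sequence the log just walked through — flushing the two test interfaces, moving one into a fresh network namespace, assigning 10.0.0.1/24 (initiator side) and 10.0.0.2/24 (target side), opening TCP port 4420, and finally launching nvmf_tgt inside the namespace — can be sketched as below. The interface names, addresses, and nvmf_tgt flags are copied from the log lines above; the script itself is an illustrative reconstruction, not the real common.sh (set DRY_RUN=1 to just print the commands, since the real ones need root):

```shell
#!/usr/bin/env bash
# Illustrative reconstruction of the namespace plumbing and target launch
# recorded above (common.sh@244-268, @270, @480). Not the actual common.sh.
set -euo pipefail

# In dry-run mode just print each command instead of executing it.
run() { if [[ "${DRY_RUN:-0}" == 1 ]]; then echo "$*"; else "$@"; fi; }

setup_and_launch() {
  local tgt_if=$1 init_if=$2 netns=$3 app=$4
  run ip -4 addr flush "$tgt_if"                       # common.sh@244
  run ip -4 addr flush "$init_if"                      # common.sh@245
  run ip netns add "$netns"                            # common.sh@248
  run ip link set "$tgt_if" netns "$netns"             # common.sh@251
  run ip addr add 10.0.0.1/24 dev "$init_if"           # initiator side, @254
  run ip netns exec "$netns" ip addr add 10.0.0.2/24 dev "$tgt_if"  # target side, @255
  run ip link set "$init_if" up                        # @258
  run ip netns exec "$netns" ip link set "$tgt_if" up  # @260
  run ip netns exec "$netns" ip link set lo up         # @261
  run iptables -I INPUT 1 -i "$init_if" -p tcp --dport 4420 -j ACCEPT  # @264
  # NVMF_APP gets the netns-exec prefix prepended (@270); the target then
  # starts inside the namespace with the flags seen at common.sh@480.
  run ip netns exec "$netns" "$app" -i 0 -e 0xFFFF -m 0xF
}

DRY_RUN=1 setup_and_launch cvl_0_0 cvl_0_1 cvl_0_0_ns_spdk \
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
```

After this, the two `ping -c 1` checks above simply verify that each side can reach the other across the namespace boundary before the test proceeds.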
00:17:01.573 [2024-07-25 01:17:23.829353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:01.573 [2024-07-25 01:17:23.829369] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:01.573 [2024-07-25 01:17:23.829460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:01.573 [2024-07-25 01:17:23.829461] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:02.144 01:17:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:02.144 01:17:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:17:02.144 01:17:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:02.144 01:17:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:02.144 01:17:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:02.144 01:17:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:02.144 01:17:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:02.144 01:17:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.144 01:17:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:02.144 [2024-07-25 01:17:24.551901] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:02.144 01:17:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.144 01:17:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:02.144 01:17:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.144 01:17:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:02.144 Malloc0 00:17:02.144 01:17:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.144 01:17:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:02.144 01:17:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.144 01:17:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:02.144 01:17:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.144 01:17:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:02.144 01:17:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.144 01:17:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:02.144 01:17:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.144 01:17:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:02.144 01:17:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.144 01:17:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:02.144 [2024-07-25 01:17:24.603781] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:02.144 01:17:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.144 01:17:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:17:02.144 test case1: single bdev can't be used in multiple subsystems 00:17:02.144 01:17:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:17:02.144 01:17:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.144 01:17:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:02.144 01:17:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.144 01:17:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:17:02.144 01:17:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.144 01:17:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:02.144 01:17:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.144 01:17:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:17:02.144 01:17:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:17:02.144 01:17:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.144 01:17:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:02.144 [2024-07-25 01:17:24.627719] bdev.c:8075:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:17:02.144 [2024-07-25 01:17:24.627738] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:17:02.144 [2024-07-25 01:17:24.627745] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.144 request: 00:17:02.144 { 00:17:02.144 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:17:02.144 "namespace": { 00:17:02.144 "bdev_name": "Malloc0", 00:17:02.144 "no_auto_visible": false 00:17:02.144 }, 00:17:02.144 "method": "nvmf_subsystem_add_ns", 00:17:02.144 "req_id": 1 00:17:02.144 } 00:17:02.144 Got JSON-RPC error response 00:17:02.144 response: 00:17:02.144 { 00:17:02.144 "code": -32602, 00:17:02.144 "message": "Invalid parameters" 00:17:02.144 } 00:17:02.144 01:17:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:02.144 01:17:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:17:02.144 01:17:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:17:02.144 01:17:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding 
namespace failed - expected result.' 00:17:02.144 Adding namespace failed - expected result. 00:17:02.144 01:17:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:17:02.144 test case2: host connect to nvmf target in multiple paths 00:17:02.144 01:17:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:02.144 01:17:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.144 01:17:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:02.404 [2024-07-25 01:17:24.639848] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:02.404 01:17:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.404 01:17:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:03.342 01:17:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:17:04.723 01:17:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:17:04.723 01:17:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:17:04.723 01:17:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:04.723 01:17:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:04.723 01:17:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:17:06.636 01:17:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:06.636 01:17:28 
nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:06.636 01:17:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:06.636 01:17:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:06.636 01:17:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:06.636 01:17:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:17:06.636 01:17:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:17:06.636 [global] 00:17:06.636 thread=1 00:17:06.636 invalidate=1 00:17:06.636 rw=write 00:17:06.636 time_based=1 00:17:06.636 runtime=1 00:17:06.636 ioengine=libaio 00:17:06.636 direct=1 00:17:06.636 bs=4096 00:17:06.636 iodepth=1 00:17:06.636 norandommap=0 00:17:06.636 numjobs=1 00:17:06.636 00:17:06.636 verify_dump=1 00:17:06.636 verify_backlog=512 00:17:06.636 verify_state_save=0 00:17:06.636 do_verify=1 00:17:06.636 verify=crc32c-intel 00:17:06.636 [job0] 00:17:06.636 filename=/dev/nvme0n1 00:17:06.636 Could not set queue depth (nvme0n1) 00:17:06.895 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:06.895 fio-3.35 00:17:06.895 Starting 1 thread 00:17:08.277 00:17:08.277 job0: (groupid=0, jobs=1): err= 0: pid=880851: Thu Jul 25 01:17:30 2024 00:17:08.277 read: IOPS=631, BW=2525KiB/s (2585kB/s)(2540KiB/1006msec) 00:17:08.277 slat (nsec): min=6616, max=53664, avg=19413.62, stdev=6585.49 00:17:08.277 clat (usec): min=439, max=41989, avg=983.28, stdev=3217.71 00:17:08.277 lat (usec): min=447, max=42012, avg=1002.70, stdev=3217.43 00:17:08.277 clat percentiles (usec): 00:17:08.277 | 1.00th=[ 529], 5.00th=[ 578], 10.00th=[ 619], 20.00th=[ 668], 00:17:08.277 | 30.00th=[ 685], 40.00th=[ 693], 50.00th=[ 725], 60.00th=[ 742], 00:17:08.277 | 
70.00th=[ 766], 80.00th=[ 791], 90.00th=[ 816], 95.00th=[ 930], 00:17:08.277 | 99.00th=[ 1139], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:17:08.277 | 99.99th=[42206] 00:17:08.277 write: IOPS=1017, BW=4072KiB/s (4169kB/s)(4096KiB/1006msec); 0 zone resets 00:17:08.277 slat (usec): min=8, max=306, avg=11.47, stdev=10.16 00:17:08.277 clat (usec): min=223, max=997, avg=342.50, stdev=147.45 00:17:08.277 lat (usec): min=233, max=1026, avg=353.97, stdev=149.86 00:17:08.277 clat percentiles (usec): 00:17:08.277 | 1.00th=[ 227], 5.00th=[ 229], 10.00th=[ 231], 20.00th=[ 235], 00:17:08.277 | 30.00th=[ 249], 40.00th=[ 273], 50.00th=[ 285], 60.00th=[ 302], 00:17:08.277 | 70.00th=[ 347], 80.00th=[ 424], 90.00th=[ 611], 95.00th=[ 701], 00:17:08.277 | 99.00th=[ 775], 99.50th=[ 816], 99.90th=[ 996], 99.95th=[ 996], 00:17:08.277 | 99.99th=[ 996] 00:17:08.277 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=2 00:17:08.277 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:17:08.277 lat (usec) : 250=18.99%, 500=33.51%, 750=32.79%, 1000=13.32% 00:17:08.277 lat (msec) : 2=1.15%, 50=0.24% 00:17:08.277 cpu : usr=1.00%, sys=2.79%, ctx=1659, majf=0, minf=2 00:17:08.277 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:08.277 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.277 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.277 issued rwts: total=635,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:08.277 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:08.277 00:17:08.277 Run status group 0 (all jobs): 00:17:08.277 READ: bw=2525KiB/s (2585kB/s), 2525KiB/s-2525KiB/s (2585kB/s-2585kB/s), io=2540KiB (2601kB), run=1006-1006msec 00:17:08.277 WRITE: bw=4072KiB/s (4169kB/s), 4072KiB/s-4072KiB/s (4169kB/s-4169kB/s), io=4096KiB (4194kB), run=1006-1006msec 00:17:08.277 00:17:08.277 Disk stats (read/write): 00:17:08.277 nvme0n1: 
ios=648/1024, merge=0/0, ticks=604/347, in_queue=951, util=94.68% 00:17:08.277 01:17:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:08.277 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:17:08.277 01:17:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:08.277 01:17:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:17:08.277 01:17:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:08.277 01:17:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:08.277 01:17:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:08.277 01:17:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:08.277 01:17:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:17:08.277 01:17:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:17:08.277 01:17:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:17:08.277 01:17:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:08.277 01:17:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:17:08.277 01:17:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:08.277 01:17:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:17:08.277 01:17:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:08.277 01:17:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:08.277 rmmod nvme_tcp 00:17:08.277 rmmod nvme_fabrics 00:17:08.277 rmmod nvme_keyring 00:17:08.277 01:17:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:08.277 01:17:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:17:08.277 01:17:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:17:08.278 
01:17:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 879771 ']' 00:17:08.278 01:17:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 879771 00:17:08.278 01:17:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 879771 ']' 00:17:08.278 01:17:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 879771 00:17:08.278 01:17:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:17:08.278 01:17:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:08.278 01:17:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 879771 00:17:08.278 01:17:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:08.278 01:17:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:08.278 01:17:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 879771' 00:17:08.278 killing process with pid 879771 00:17:08.278 01:17:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 879771 00:17:08.278 01:17:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 879771 00:17:08.538 01:17:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:08.538 01:17:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:08.538 01:17:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:08.538 01:17:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:08.538 01:17:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:08.538 01:17:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:08.538 01:17:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:08.538 01:17:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:11.117 01:17:32 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:11.117 00:17:11.117 real 0m14.374s 00:17:11.117 user 0m35.038s 00:17:11.117 sys 0m4.500s 00:17:11.117 01:17:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:11.117 01:17:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:11.117 ************************************ 00:17:11.117 END TEST nvmf_nmic 00:17:11.117 ************************************ 00:17:11.117 01:17:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:11.117 01:17:32 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:17:11.117 01:17:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:11.117 01:17:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:11.117 01:17:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:11.117 ************************************ 00:17:11.117 START TEST nvmf_fio_target 00:17:11.117 ************************************ 00:17:11.117 01:17:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:17:11.117 * Looking for test storage... 
00:17:11.117 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:11.117 01:17:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:11.117 01:17:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:17:11.117 01:17:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:11.117 01:17:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:11.117 01:17:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:11.117 01:17:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:11.117 01:17:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:11.118 01:17:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:11.118 01:17:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:11.118 01:17:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:11.118 01:17:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:11.118 01:17:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:11.118 01:17:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:11.118 01:17:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:11.118 01:17:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:11.118 01:17:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:11.118 01:17:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:11.118 01:17:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:11.118 01:17:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:11.118 01:17:33 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:11.118 01:17:33 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:11.118 01:17:33 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:11.118 01:17:33 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.118 01:17:33 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.118 01:17:33 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.118 01:17:33 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:17:11.118 01:17:33 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.118 01:17:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:17:11.118 01:17:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:11.118 01:17:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:11.118 01:17:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:11.118 01:17:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:11.118 01:17:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:11.118 01:17:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:11.118 01:17:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:17:11.118 01:17:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:11.118 01:17:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:11.118 01:17:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:11.118 01:17:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:11.118 01:17:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:17:11.118 01:17:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:11.118 01:17:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:11.118 01:17:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:11.118 01:17:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:11.118 01:17:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:11.118 01:17:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:11.118 01:17:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:11.118 01:17:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:11.118 01:17:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:11.118 01:17:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:11.118 01:17:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:17:11.118 01:17:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.401 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:16.401 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:17:16.401 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:16.401 
01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:16.401 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:16.401 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:16.401 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:16.401 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:17:16.401 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:16.401 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:17:16.401 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:17:16.401 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:17:16.401 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:17:16.401 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:17:16.401 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:17:16.401 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:16.401 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:16.401 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:16.401 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:16.401 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:16.401 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:16.401 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:16.401 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:16.401 
01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:16.401 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:16.401 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:16.401 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:16.401 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:16.401 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:16.401 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:16.401 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:16.401 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:16.401 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:16.401 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:16.401 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:16.401 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:16.401 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:16.401 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:16.401 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:16.401 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:16.401 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:16.401 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:16.401 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:16.401 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 
-- # [[ ice == unknown ]] 00:17:16.401 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:16.401 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:16.401 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:16.401 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:16.401 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:16.401 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:16.401 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:16.401 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:16.401 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:16.401 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:16.401 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:16.401 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:16.401 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:16.401 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:16.401 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:16.401 Found net devices under 0000:86:00.0: cvl_0_0 00:17:16.401 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:16.401 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:16.401 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:16.401 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # 
[[ tcp == tcp ]] 00:17:16.401 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:16.401 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:16.401 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:16.401 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:16.401 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:16.401 Found net devices under 0000:86:00.1: cvl_0_1 00:17:16.401 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:16.401 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:16.401 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:17:16.401 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:16.401 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:16.401 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:16.401 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:16.401 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:16.402 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:16.402 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:16.402 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:16.402 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:16.402 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:16.402 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:16.402 01:17:37 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:16.402 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:16.402 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:16.402 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:16.402 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:16.402 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:16.402 01:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:16.402 01:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:16.402 01:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:16.402 01:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:16.402 01:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:16.402 01:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:16.402 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:16.402 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:17:16.402 00:17:16.402 --- 10.0.0.2 ping statistics --- 00:17:16.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:16.402 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:17:16.402 01:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:16.402 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:16.402 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.378 ms 00:17:16.402 00:17:16.402 --- 10.0.0.1 ping statistics --- 00:17:16.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:16.402 rtt min/avg/max/mdev = 0.378/0.378/0.378/0.000 ms 00:17:16.402 01:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:16.402 01:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:17:16.402 01:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:16.402 01:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:16.402 01:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:16.402 01:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:16.402 01:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:16.402 01:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:16.402 01:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:16.402 01:17:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:17:16.402 01:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:16.402 01:17:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:16.402 01:17:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.402 01:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=884387 00:17:16.402 01:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 884387 00:17:16.402 01:17:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 884387 ']' 00:17:16.402 01:17:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:16.402 01:17:38 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:17:16.402 01:17:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:16.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:16.402 01:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:16.402 01:17:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:16.402 01:17:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.402 [2024-07-25 01:17:38.180006] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:17:16.402 [2024-07-25 01:17:38.180051] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:16.402 EAL: No free 2048 kB hugepages reported on node 1 00:17:16.402 [2024-07-25 01:17:38.234754] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:16.402 [2024-07-25 01:17:38.315228] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:16.402 [2024-07-25 01:17:38.315261] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:16.402 [2024-07-25 01:17:38.315272] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:16.402 [2024-07-25 01:17:38.315278] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:16.402 [2024-07-25 01:17:38.315284] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:16.402 [2024-07-25 01:17:38.315325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:16.402 [2024-07-25 01:17:38.315421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:16.402 [2024-07-25 01:17:38.315438] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:16.402 [2024-07-25 01:17:38.315440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:16.662 01:17:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:16.662 01:17:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:17:16.662 01:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:16.662 01:17:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:16.662 01:17:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.662 01:17:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:16.662 01:17:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:16.922 [2024-07-25 01:17:39.187646] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:16.922 01:17:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:16.922 01:17:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:17:17.182 01:17:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:17.182 01:17:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:17:17.182 01:17:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 00:17:17.442 01:17:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:17:17.442 01:17:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:17.702 01:17:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:17:17.702 01:17:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:17:17.702 01:17:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:17.962 01:17:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:17:17.962 01:17:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:18.222 01:17:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:17:18.222 01:17:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:18.482 01:17:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:17:18.482 01:17:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:17:18.482 01:17:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:18.742 01:17:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:17:18.742 01:17:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:19.002 01:17:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:17:19.002 01:17:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:19.262 01:17:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:19.262 [2024-07-25 01:17:41.693670] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:19.262 01:17:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:17:19.522 01:17:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:17:19.782 01:17:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:20.721 01:17:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:17:20.721 01:17:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:17:20.721 01:17:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:20.721 01:17:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:17:20.721 01:17:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:17:20.721 01:17:43 
nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:17:23.262 01:17:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:23.262 01:17:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:23.262 01:17:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:23.262 01:17:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:17:23.262 01:17:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:23.262 01:17:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:17:23.262 01:17:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:17:23.262 [global] 00:17:23.262 thread=1 00:17:23.262 invalidate=1 00:17:23.262 rw=write 00:17:23.262 time_based=1 00:17:23.262 runtime=1 00:17:23.262 ioengine=libaio 00:17:23.262 direct=1 00:17:23.262 bs=4096 00:17:23.262 iodepth=1 00:17:23.262 norandommap=0 00:17:23.262 numjobs=1 00:17:23.262 00:17:23.262 verify_dump=1 00:17:23.262 verify_backlog=512 00:17:23.262 verify_state_save=0 00:17:23.262 do_verify=1 00:17:23.262 verify=crc32c-intel 00:17:23.262 [job0] 00:17:23.262 filename=/dev/nvme0n1 00:17:23.262 [job1] 00:17:23.262 filename=/dev/nvme0n2 00:17:23.262 [job2] 00:17:23.262 filename=/dev/nvme0n3 00:17:23.262 [job3] 00:17:23.262 filename=/dev/nvme0n4 00:17:23.262 Could not set queue depth (nvme0n1) 00:17:23.262 Could not set queue depth (nvme0n2) 00:17:23.262 Could not set queue depth (nvme0n3) 00:17:23.262 Could not set queue depth (nvme0n4) 00:17:23.262 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:23.262 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=1 00:17:23.262 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:23.262 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:23.262 fio-3.35 00:17:23.262 Starting 4 threads 00:17:24.640 00:17:24.640 job0: (groupid=0, jobs=1): err= 0: pid=885753: Thu Jul 25 01:17:46 2024 00:17:24.640 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:17:24.640 slat (nsec): min=6401, max=15666, avg=7305.04, stdev=727.99 00:17:24.640 clat (usec): min=378, max=1013, avg=502.29, stdev=30.23 00:17:24.640 lat (usec): min=385, max=1020, avg=509.59, stdev=30.24 00:17:24.640 clat percentiles (usec): 00:17:24.640 | 1.00th=[ 445], 5.00th=[ 461], 10.00th=[ 474], 20.00th=[ 490], 00:17:24.640 | 30.00th=[ 494], 40.00th=[ 498], 50.00th=[ 502], 60.00th=[ 506], 00:17:24.640 | 70.00th=[ 510], 80.00th=[ 519], 90.00th=[ 529], 95.00th=[ 537], 00:17:24.640 | 99.00th=[ 562], 99.50th=[ 594], 99.90th=[ 840], 99.95th=[ 1012], 00:17:24.640 | 99.99th=[ 1012] 00:17:24.640 write: IOPS=1475, BW=5902KiB/s (6044kB/s)(5908KiB/1001msec); 0 zone resets 00:17:24.640 slat (nsec): min=9303, max=42943, avg=10928.00, stdev=2018.32 00:17:24.640 clat (usec): min=224, max=1081, avg=308.89, stdev=98.13 00:17:24.640 lat (usec): min=235, max=1110, avg=319.82, stdev=99.12 00:17:24.640 clat percentiles (usec): 00:17:24.640 | 1.00th=[ 229], 5.00th=[ 243], 10.00th=[ 247], 20.00th=[ 251], 00:17:24.640 | 30.00th=[ 253], 40.00th=[ 258], 50.00th=[ 265], 60.00th=[ 289], 00:17:24.640 | 70.00th=[ 330], 80.00th=[ 343], 90.00th=[ 412], 95.00th=[ 519], 00:17:24.640 | 99.00th=[ 693], 99.50th=[ 816], 99.90th=[ 1012], 99.95th=[ 1090], 00:17:24.640 | 99.99th=[ 1090] 00:17:24.640 bw ( KiB/s): min= 6264, max= 6264, per=38.21%, avg=6264.00, stdev= 0.00, samples=1 00:17:24.640 iops : min= 1566, max= 1566, avg=1566.00, stdev= 0.00, samples=1 00:17:24.640 lat (usec) : 250=10.72%, 500=65.89%, 750=22.83%, 1000=0.44% 
00:17:24.640 lat (msec) : 2=0.12% 00:17:24.640 cpu : usr=1.70%, sys=2.10%, ctx=2504, majf=0, minf=1 00:17:24.640 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:24.640 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:24.640 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:24.640 issued rwts: total=1024,1477,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:24.640 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:24.640 job1: (groupid=0, jobs=1): err= 0: pid=885764: Thu Jul 25 01:17:46 2024 00:17:24.640 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:17:24.640 slat (nsec): min=6330, max=14406, avg=7376.82, stdev=1012.68 00:17:24.640 clat (usec): min=443, max=1053, avg=581.86, stdev=68.05 00:17:24.640 lat (usec): min=450, max=1060, avg=589.24, stdev=68.04 00:17:24.640 clat percentiles (usec): 00:17:24.640 | 1.00th=[ 474], 5.00th=[ 494], 10.00th=[ 506], 20.00th=[ 537], 00:17:24.640 | 30.00th=[ 553], 40.00th=[ 562], 50.00th=[ 570], 60.00th=[ 578], 00:17:24.640 | 70.00th=[ 594], 80.00th=[ 635], 90.00th=[ 676], 95.00th=[ 709], 00:17:24.640 | 99.00th=[ 791], 99.50th=[ 807], 99.90th=[ 1020], 99.95th=[ 1057], 00:17:24.640 | 99.99th=[ 1057] 00:17:24.640 write: IOPS=1111, BW=4448KiB/s (4554kB/s)(4452KiB/1001msec); 0 zone resets 00:17:24.640 slat (nsec): min=9179, max=94553, avg=10947.94, stdev=3908.81 00:17:24.640 clat (usec): min=225, max=1343, avg=340.93, stdev=126.93 00:17:24.640 lat (usec): min=235, max=1371, avg=351.88, stdev=128.63 00:17:24.640 clat percentiles (usec): 00:17:24.640 | 1.00th=[ 227], 5.00th=[ 231], 10.00th=[ 233], 20.00th=[ 239], 00:17:24.640 | 30.00th=[ 251], 40.00th=[ 285], 50.00th=[ 330], 60.00th=[ 359], 00:17:24.640 | 70.00th=[ 379], 80.00th=[ 400], 90.00th=[ 437], 95.00th=[ 486], 00:17:24.640 | 99.00th=[ 930], 99.50th=[ 1090], 99.90th=[ 1205], 99.95th=[ 1352], 00:17:24.640 | 99.99th=[ 1352] 00:17:24.640 bw ( KiB/s): min= 4096, max= 4096, 
per=24.98%, avg=4096.00, stdev= 0.00, samples=1 00:17:24.640 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:24.640 lat (usec) : 250=15.21%, 500=38.42%, 750=44.31%, 1000=1.54% 00:17:24.640 lat (msec) : 2=0.51% 00:17:24.640 cpu : usr=0.70%, sys=2.40%, ctx=2138, majf=0, minf=2 00:17:24.640 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:24.640 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:24.640 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:24.640 issued rwts: total=1024,1113,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:24.640 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:24.640 job2: (groupid=0, jobs=1): err= 0: pid=885784: Thu Jul 25 01:17:46 2024 00:17:24.640 read: IOPS=20, BW=82.8KiB/s (84.8kB/s)(84.0KiB/1014msec) 00:17:24.640 slat (nsec): min=10912, max=23373, avg=21657.57, stdev=3534.13 00:17:24.640 clat (usec): min=1051, max=43135, avg=40224.32, stdev=8989.49 00:17:24.640 lat (usec): min=1074, max=43158, avg=40245.98, stdev=8989.25 00:17:24.640 clat percentiles (usec): 00:17:24.640 | 1.00th=[ 1057], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:17:24.640 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:17:24.640 | 70.00th=[42206], 80.00th=[42730], 90.00th=[42730], 95.00th=[42730], 00:17:24.640 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:17:24.640 | 99.99th=[43254] 00:17:24.640 write: IOPS=504, BW=2020KiB/s (2068kB/s)(2048KiB/1014msec); 0 zone resets 00:17:24.640 slat (nsec): min=10842, max=37712, avg=12317.93, stdev=1929.07 00:17:24.640 clat (usec): min=278, max=1106, avg=314.00, stdev=105.96 00:17:24.640 lat (usec): min=290, max=1133, avg=326.31, stdev=107.20 00:17:24.640 clat percentiles (usec): 00:17:24.640 | 1.00th=[ 281], 5.00th=[ 281], 10.00th=[ 281], 20.00th=[ 285], 00:17:24.640 | 30.00th=[ 285], 40.00th=[ 285], 50.00th=[ 289], 60.00th=[ 289], 00:17:24.640 | 
70.00th=[ 293], 80.00th=[ 297], 90.00th=[ 326], 95.00th=[ 404], 00:17:24.640 | 99.00th=[ 938], 99.50th=[ 1012], 99.90th=[ 1106], 99.95th=[ 1106], 00:17:24.640 | 99.99th=[ 1106] 00:17:24.640 bw ( KiB/s): min= 4096, max= 4096, per=24.98%, avg=4096.00, stdev= 0.00, samples=1 00:17:24.640 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:24.640 lat (usec) : 500=92.68%, 750=1.13%, 1000=1.69% 00:17:24.641 lat (msec) : 2=0.75%, 50=3.75% 00:17:24.641 cpu : usr=0.10%, sys=0.79%, ctx=533, majf=0, minf=1 00:17:24.641 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:24.641 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:24.641 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:24.641 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:24.641 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:24.641 job3: (groupid=0, jobs=1): err= 0: pid=885789: Thu Jul 25 01:17:46 2024 00:17:24.641 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:17:24.641 slat (nsec): min=6726, max=24834, avg=7557.77, stdev=944.31 00:17:24.641 clat (usec): min=352, max=1020, avg=533.44, stdev=73.95 00:17:24.641 lat (usec): min=360, max=1027, avg=541.00, stdev=73.95 00:17:24.641 clat percentiles (usec): 00:17:24.641 | 1.00th=[ 383], 5.00th=[ 445], 10.00th=[ 482], 20.00th=[ 494], 00:17:24.641 | 30.00th=[ 498], 40.00th=[ 502], 50.00th=[ 515], 60.00th=[ 523], 00:17:24.641 | 70.00th=[ 529], 80.00th=[ 586], 90.00th=[ 635], 95.00th=[ 685], 00:17:24.641 | 99.00th=[ 766], 99.50th=[ 840], 99.90th=[ 979], 99.95th=[ 1020], 00:17:24.641 | 99.99th=[ 1020] 00:17:24.641 write: IOPS=1052, BW=4212KiB/s (4313kB/s)(4216KiB/1001msec); 0 zone resets 00:17:24.641 slat (usec): min=9, max=2230, avg=13.65, stdev=68.43 00:17:24.641 clat (usec): min=234, max=1616, avg=403.22, stdev=138.86 00:17:24.641 lat (usec): min=245, max=2707, avg=416.88, stdev=157.32 00:17:24.641 clat percentiles 
(usec): 00:17:24.641 | 1.00th=[ 249], 5.00th=[ 255], 10.00th=[ 265], 20.00th=[ 334], 00:17:24.641 | 30.00th=[ 338], 40.00th=[ 355], 50.00th=[ 379], 60.00th=[ 400], 00:17:24.641 | 70.00th=[ 424], 80.00th=[ 461], 90.00th=[ 502], 95.00th=[ 562], 00:17:24.641 | 99.00th=[ 1123], 99.50th=[ 1270], 99.90th=[ 1352], 99.95th=[ 1614], 00:17:24.641 | 99.99th=[ 1614] 00:17:24.641 bw ( KiB/s): min= 4096, max= 4096, per=24.98%, avg=4096.00, stdev= 0.00, samples=1 00:17:24.641 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:24.641 lat (usec) : 250=0.87%, 500=64.39%, 750=32.68%, 1000=1.20% 00:17:24.641 lat (msec) : 2=0.87% 00:17:24.641 cpu : usr=0.90%, sys=2.30%, ctx=2081, majf=0, minf=1 00:17:24.641 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:24.641 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:24.641 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:24.641 issued rwts: total=1024,1054,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:24.641 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:24.641 00:17:24.641 Run status group 0 (all jobs): 00:17:24.641 READ: bw=11.9MiB/s (12.5MB/s), 82.8KiB/s-4092KiB/s (84.8kB/s-4190kB/s), io=12.1MiB (12.7MB), run=1001-1014msec 00:17:24.641 WRITE: bw=16.0MiB/s (16.8MB/s), 2020KiB/s-5902KiB/s (2068kB/s-6044kB/s), io=16.2MiB (17.0MB), run=1001-1014msec 00:17:24.641 00:17:24.641 Disk stats (read/write): 00:17:24.641 nvme0n1: ios=1049/1036, merge=0/0, ticks=1475/305, in_queue=1780, util=98.10% 00:17:24.641 nvme0n2: ios=862/1024, merge=0/0, ticks=636/346, in_queue=982, util=98.47% 00:17:24.641 nvme0n3: ios=16/512, merge=0/0, ticks=676/157, in_queue=833, util=88.94% 00:17:24.641 nvme0n4: ios=831/1024, merge=0/0, ticks=634/403, in_queue=1037, util=98.42% 00:17:24.641 01:17:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 
1 -v 00:17:24.641 [global] 00:17:24.641 thread=1 00:17:24.641 invalidate=1 00:17:24.641 rw=randwrite 00:17:24.641 time_based=1 00:17:24.641 runtime=1 00:17:24.641 ioengine=libaio 00:17:24.641 direct=1 00:17:24.641 bs=4096 00:17:24.641 iodepth=1 00:17:24.641 norandommap=0 00:17:24.641 numjobs=1 00:17:24.641 00:17:24.641 verify_dump=1 00:17:24.641 verify_backlog=512 00:17:24.641 verify_state_save=0 00:17:24.641 do_verify=1 00:17:24.641 verify=crc32c-intel 00:17:24.641 [job0] 00:17:24.641 filename=/dev/nvme0n1 00:17:24.641 [job1] 00:17:24.641 filename=/dev/nvme0n2 00:17:24.641 [job2] 00:17:24.641 filename=/dev/nvme0n3 00:17:24.641 [job3] 00:17:24.641 filename=/dev/nvme0n4 00:17:24.641 Could not set queue depth (nvme0n1) 00:17:24.641 Could not set queue depth (nvme0n2) 00:17:24.641 Could not set queue depth (nvme0n3) 00:17:24.641 Could not set queue depth (nvme0n4) 00:17:24.641 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:24.641 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:24.641 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:24.641 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:24.641 fio-3.35 00:17:24.641 Starting 4 threads 00:17:26.021 00:17:26.021 job0: (groupid=0, jobs=1): err= 0: pid=886215: Thu Jul 25 01:17:48 2024 00:17:26.021 read: IOPS=665, BW=2660KiB/s (2724kB/s)(2668KiB/1003msec) 00:17:26.021 slat (nsec): min=6193, max=40028, avg=7479.77, stdev=2254.58 00:17:26.021 clat (usec): min=350, max=42992, avg=1009.29, stdev=4447.68 00:17:26.021 lat (usec): min=356, max=43002, avg=1016.77, stdev=4448.36 00:17:26.021 clat percentiles (usec): 00:17:26.021 | 1.00th=[ 371], 5.00th=[ 416], 10.00th=[ 465], 20.00th=[ 490], 00:17:26.021 | 30.00th=[ 523], 40.00th=[ 529], 50.00th=[ 537], 60.00th=[ 537], 
00:17:26.022 | 70.00th=[ 545], 80.00th=[ 545], 90.00th=[ 553], 95.00th=[ 562], 00:17:26.022 | 99.00th=[41157], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:17:26.022 | 99.99th=[43254] 00:17:26.022 write: IOPS=1020, BW=4084KiB/s (4182kB/s)(4096KiB/1003msec); 0 zone resets 00:17:26.022 slat (nsec): min=8899, max=38218, avg=10683.79, stdev=2412.86 00:17:26.022 clat (usec): min=225, max=1167, avg=302.00, stdev=95.18 00:17:26.022 lat (usec): min=234, max=1178, avg=312.69, stdev=96.20 00:17:26.022 clat percentiles (usec): 00:17:26.022 | 1.00th=[ 227], 5.00th=[ 231], 10.00th=[ 233], 20.00th=[ 237], 00:17:26.022 | 30.00th=[ 243], 40.00th=[ 260], 50.00th=[ 273], 60.00th=[ 306], 00:17:26.022 | 70.00th=[ 334], 80.00th=[ 343], 90.00th=[ 371], 95.00th=[ 449], 00:17:26.022 | 99.00th=[ 717], 99.50th=[ 824], 99.90th=[ 1020], 99.95th=[ 1172], 00:17:26.022 | 99.99th=[ 1172] 00:17:26.022 bw ( KiB/s): min= 2280, max= 5900, per=24.54%, avg=4090.00, stdev=2559.73, samples=2 00:17:26.022 iops : min= 570, max= 1475, avg=1022.50, stdev=639.93, samples=2 00:17:26.022 lat (usec) : 250=20.93%, 500=46.48%, 750=31.46%, 1000=0.53% 00:17:26.022 lat (msec) : 2=0.12%, 50=0.47% 00:17:26.022 cpu : usr=1.10%, sys=1.40%, ctx=1693, majf=0, minf=1 00:17:26.022 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:26.022 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:26.022 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:26.022 issued rwts: total=667,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:26.022 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:26.022 job1: (groupid=0, jobs=1): err= 0: pid=886225: Thu Jul 25 01:17:48 2024 00:17:26.022 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:17:26.022 slat (nsec): min=6284, max=29254, avg=8793.25, stdev=1798.72 00:17:26.022 clat (usec): min=333, max=1395, avg=533.28, stdev=113.39 00:17:26.022 lat (usec): min=342, max=1402, avg=542.07, 
stdev=113.49 00:17:26.022 clat percentiles (usec): 00:17:26.022 | 1.00th=[ 355], 5.00th=[ 404], 10.00th=[ 445], 20.00th=[ 478], 00:17:26.022 | 30.00th=[ 490], 40.00th=[ 498], 50.00th=[ 506], 60.00th=[ 519], 00:17:26.022 | 70.00th=[ 529], 80.00th=[ 562], 90.00th=[ 676], 95.00th=[ 734], 00:17:26.022 | 99.00th=[ 1004], 99.50th=[ 1012], 99.90th=[ 1385], 99.95th=[ 1401], 00:17:26.022 | 99.99th=[ 1401] 00:17:26.022 write: IOPS=1233, BW=4935KiB/s (5054kB/s)(4940KiB/1001msec); 0 zone resets 00:17:26.022 slat (usec): min=6, max=482, avg=10.71, stdev=13.90 00:17:26.022 clat (usec): min=221, max=1390, avg=344.24, stdev=146.81 00:17:26.022 lat (usec): min=230, max=1462, avg=354.95, stdev=150.02 00:17:26.022 clat percentiles (usec): 00:17:26.022 | 1.00th=[ 227], 5.00th=[ 235], 10.00th=[ 239], 20.00th=[ 249], 00:17:26.022 | 30.00th=[ 260], 40.00th=[ 269], 50.00th=[ 293], 60.00th=[ 334], 00:17:26.022 | 70.00th=[ 343], 80.00th=[ 379], 90.00th=[ 603], 95.00th=[ 619], 00:17:26.022 | 99.00th=[ 914], 99.50th=[ 1074], 99.90th=[ 1287], 99.95th=[ 1385], 00:17:26.022 | 99.99th=[ 1385] 00:17:26.022 bw ( KiB/s): min= 4830, max= 4830, per=28.98%, avg=4830.00, stdev= 0.00, samples=1 00:17:26.022 iops : min= 1207, max= 1207, avg=1207.00, stdev= 0.00, samples=1 00:17:26.022 lat (usec) : 250=11.29%, 500=56.13%, 750=29.39%, 1000=2.39% 00:17:26.022 lat (msec) : 2=0.80% 00:17:26.022 cpu : usr=1.30%, sys=2.10%, ctx=2263, majf=0, minf=1 00:17:26.022 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:26.022 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:26.022 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:26.022 issued rwts: total=1024,1235,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:26.022 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:26.022 job2: (groupid=0, jobs=1): err= 0: pid=886236: Thu Jul 25 01:17:48 2024 00:17:26.022 read: IOPS=1024, BW=4096KiB/s (4194kB/s)(4096KiB/1000msec) 
00:17:26.022 slat (nsec): min=6611, max=37571, avg=8587.39, stdev=1597.49 00:17:26.022 clat (usec): min=346, max=964, avg=529.75, stdev=55.98 00:17:26.022 lat (usec): min=355, max=976, avg=538.34, stdev=56.36 00:17:26.022 clat percentiles (usec): 00:17:26.022 | 1.00th=[ 392], 5.00th=[ 474], 10.00th=[ 494], 20.00th=[ 502], 00:17:26.022 | 30.00th=[ 510], 40.00th=[ 515], 50.00th=[ 523], 60.00th=[ 529], 00:17:26.022 | 70.00th=[ 537], 80.00th=[ 545], 90.00th=[ 570], 95.00th=[ 594], 00:17:26.022 | 99.00th=[ 758], 99.50th=[ 766], 99.90th=[ 873], 99.95th=[ 963], 00:17:26.022 | 99.99th=[ 963] 00:17:26.022 write: IOPS=1490, BW=5962KiB/s (6105kB/s)(5968KiB/1001msec); 0 zone resets 00:17:26.022 slat (nsec): min=10217, max=61770, avg=12549.46, stdev=2532.15 00:17:26.022 clat (usec): min=220, max=991, avg=282.95, stdev=80.99 00:17:26.022 lat (usec): min=233, max=1004, avg=295.50, stdev=81.69 00:17:26.022 clat percentiles (usec): 00:17:26.022 | 1.00th=[ 225], 5.00th=[ 231], 10.00th=[ 235], 20.00th=[ 239], 00:17:26.022 | 30.00th=[ 245], 40.00th=[ 251], 50.00th=[ 255], 60.00th=[ 262], 00:17:26.022 | 70.00th=[ 273], 80.00th=[ 306], 90.00th=[ 367], 95.00th=[ 445], 00:17:26.022 | 99.00th=[ 676], 99.50th=[ 766], 99.90th=[ 848], 99.95th=[ 996], 00:17:26.022 | 99.99th=[ 996] 00:17:26.022 bw ( KiB/s): min= 5852, max= 5852, per=35.11%, avg=5852.00, stdev= 0.00, samples=1 00:17:26.022 iops : min= 1463, max= 1463, avg=1463.00, stdev= 0.00, samples=1 00:17:26.022 lat (usec) : 250=23.93%, 500=40.74%, 750=34.42%, 1000=0.91% 00:17:26.022 cpu : usr=2.70%, sys=3.70%, ctx=2517, majf=0, minf=2 00:17:26.022 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:26.022 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:26.022 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:26.022 issued rwts: total=1024,1492,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:26.022 latency : target=0, window=0, percentile=100.00%, depth=1 
00:17:26.022 job3: (groupid=0, jobs=1): err= 0: pid=886241: Thu Jul 25 01:17:48 2024 00:17:26.022 read: IOPS=20, BW=82.1KiB/s (84.1kB/s)(84.0KiB/1023msec) 00:17:26.022 slat (nsec): min=9821, max=22821, avg=21171.33, stdev=3719.77 00:17:26.022 clat (usec): min=1017, max=43028, avg=40158.25, stdev=8987.16 00:17:26.022 lat (usec): min=1039, max=43050, avg=40179.42, stdev=8986.90 00:17:26.022 clat percentiles (usec): 00:17:26.022 | 1.00th=[ 1020], 5.00th=[40633], 10.00th=[41157], 20.00th=[41681], 00:17:26.022 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:17:26.022 | 70.00th=[42206], 80.00th=[42730], 90.00th=[43254], 95.00th=[43254], 00:17:26.022 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:17:26.022 | 99.99th=[43254] 00:17:26.022 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:17:26.022 slat (nsec): min=9381, max=37732, avg=11713.99, stdev=3996.42 00:17:26.022 clat (usec): min=233, max=910, avg=334.82, stdev=143.60 00:17:26.022 lat (usec): min=243, max=922, avg=346.54, stdev=146.53 00:17:26.022 clat percentiles (usec): 00:17:26.022 | 1.00th=[ 239], 5.00th=[ 241], 10.00th=[ 243], 20.00th=[ 249], 00:17:26.022 | 30.00th=[ 255], 40.00th=[ 265], 50.00th=[ 277], 60.00th=[ 293], 00:17:26.022 | 70.00th=[ 330], 80.00th=[ 375], 90.00th=[ 457], 95.00th=[ 742], 00:17:26.022 | 99.00th=[ 848], 99.50th=[ 898], 99.90th=[ 914], 99.95th=[ 914], 00:17:26.022 | 99.99th=[ 914] 00:17:26.022 bw ( KiB/s): min= 4087, max= 4087, per=24.52%, avg=4087.00, stdev= 0.00, samples=1 00:17:26.022 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:17:26.022 lat (usec) : 250=22.70%, 500=65.29%, 750=3.56%, 1000=4.50% 00:17:26.022 lat (msec) : 2=0.19%, 50=3.75% 00:17:26.022 cpu : usr=0.10%, sys=0.68%, ctx=534, majf=0, minf=1 00:17:26.022 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:26.022 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:26.022 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:26.022 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:26.022 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:26.022 00:17:26.022 Run status group 0 (all jobs): 00:17:26.022 READ: bw=10.4MiB/s (11.0MB/s), 82.1KiB/s-4096KiB/s (84.1kB/s-4194kB/s), io=10.7MiB (11.2MB), run=1000-1023msec 00:17:26.022 WRITE: bw=16.3MiB/s (17.1MB/s), 2002KiB/s-5962KiB/s (2050kB/s-6105kB/s), io=16.7MiB (17.5MB), run=1001-1023msec 00:17:26.022 00:17:26.022 Disk stats (read/write): 00:17:26.022 nvme0n1: ios=690/1024, merge=0/0, ticks=1478/297, in_queue=1775, util=98.80% 00:17:26.022 nvme0n2: ios=930/1024, merge=0/0, ticks=1408/328, in_queue=1736, util=97.15% 00:17:26.022 nvme0n3: ios=1004/1024, merge=0/0, ticks=512/291, in_queue=803, util=88.95% 00:17:26.022 nvme0n4: ios=16/512, merge=0/0, ticks=633/170, in_queue=803, util=89.60% 00:17:26.022 01:17:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:17:26.022 [global] 00:17:26.022 thread=1 00:17:26.022 invalidate=1 00:17:26.022 rw=write 00:17:26.022 time_based=1 00:17:26.022 runtime=1 00:17:26.022 ioengine=libaio 00:17:26.022 direct=1 00:17:26.022 bs=4096 00:17:26.022 iodepth=128 00:17:26.022 norandommap=0 00:17:26.022 numjobs=1 00:17:26.022 00:17:26.022 verify_dump=1 00:17:26.022 verify_backlog=512 00:17:26.022 verify_state_save=0 00:17:26.022 do_verify=1 00:17:26.022 verify=crc32c-intel 00:17:26.022 [job0] 00:17:26.022 filename=/dev/nvme0n1 00:17:26.022 [job1] 00:17:26.022 filename=/dev/nvme0n2 00:17:26.022 [job2] 00:17:26.022 filename=/dev/nvme0n3 00:17:26.022 [job3] 00:17:26.022 filename=/dev/nvme0n4 00:17:26.022 Could not set queue depth (nvme0n1) 00:17:26.022 Could not set queue depth (nvme0n2) 00:17:26.022 Could not set queue depth (nvme0n3) 00:17:26.022 Could not set queue depth (nvme0n4) 00:17:26.282 job0: 
(g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:26.282 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:26.282 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:26.282 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:26.282 fio-3.35 00:17:26.282 Starting 4 threads 00:17:27.664 00:17:27.664 job0: (groupid=0, jobs=1): err= 0: pid=886669: Thu Jul 25 01:17:49 2024 00:17:27.664 read: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec) 00:17:27.664 slat (nsec): min=966, max=34276k, avg=123596.62, stdev=1035581.41 00:17:27.664 clat (usec): min=1811, max=65316, avg=16139.06, stdev=10743.81 00:17:27.664 lat (usec): min=1817, max=65330, avg=16262.66, stdev=10829.64 00:17:27.664 clat percentiles (usec): 00:17:27.664 | 1.00th=[ 3228], 5.00th=[ 6980], 10.00th=[ 7701], 20.00th=[ 8717], 00:17:27.664 | 30.00th=[ 9634], 40.00th=[10159], 50.00th=[10683], 60.00th=[11863], 00:17:27.664 | 70.00th=[17695], 80.00th=[26346], 90.00th=[33162], 95.00th=[43254], 00:17:27.664 | 99.00th=[46400], 99.50th=[47973], 99.90th=[49021], 99.95th=[51643], 00:17:27.664 | 99.99th=[65274] 00:17:27.664 write: IOPS=4249, BW=16.6MiB/s (17.4MB/s)(16.7MiB/1006msec); 0 zone resets 00:17:27.664 slat (nsec): min=1794, max=10342k, avg=110138.76, stdev=578846.65 00:17:27.664 clat (usec): min=1595, max=45483, avg=14246.28, stdev=7190.91 00:17:27.664 lat (usec): min=1605, max=45492, avg=14356.41, stdev=7227.50 00:17:27.664 clat percentiles (usec): 00:17:27.664 | 1.00th=[ 5080], 5.00th=[ 6718], 10.00th=[ 7963], 20.00th=[ 9110], 00:17:27.664 | 30.00th=[ 9634], 40.00th=[10290], 50.00th=[11863], 60.00th=[13435], 00:17:27.664 | 70.00th=[16188], 80.00th=[19006], 90.00th=[23462], 95.00th=[30540], 00:17:27.664 | 99.00th=[39584], 99.50th=[42206], 99.90th=[45351], 99.95th=[45351], 
00:17:27.664 | 99.99th=[45351] 00:17:27.664 bw ( KiB/s): min=14968, max=18216, per=25.53%, avg=16592.00, stdev=2296.68, samples=2 00:17:27.664 iops : min= 3742, max= 4554, avg=4148.00, stdev=574.17, samples=2 00:17:27.664 lat (msec) : 2=0.13%, 4=0.66%, 10=34.51%, 20=44.27%, 50=20.38% 00:17:27.664 lat (msec) : 100=0.05% 00:17:27.664 cpu : usr=1.19%, sys=3.08%, ctx=593, majf=0, minf=1 00:17:27.664 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:17:27.664 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.664 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:27.664 issued rwts: total=4096,4275,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.664 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.664 job1: (groupid=0, jobs=1): err= 0: pid=886682: Thu Jul 25 01:17:49 2024 00:17:27.664 read: IOPS=3555, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1008msec) 00:17:27.664 slat (nsec): min=1635, max=24307k, avg=136799.05, stdev=999708.37 00:17:27.664 clat (usec): min=4419, max=62892, avg=16876.50, stdev=10481.01 00:17:27.664 lat (usec): min=4424, max=69990, avg=17013.30, stdev=10561.43 00:17:27.664 clat percentiles (usec): 00:17:27.664 | 1.00th=[ 6521], 5.00th=[ 7177], 10.00th=[ 8717], 20.00th=[ 9503], 00:17:27.664 | 30.00th=[10683], 40.00th=[11469], 50.00th=[12780], 60.00th=[14353], 00:17:27.664 | 70.00th=[20055], 80.00th=[22938], 90.00th=[30278], 95.00th=[38011], 00:17:27.664 | 99.00th=[58459], 99.50th=[58459], 99.90th=[62653], 99.95th=[62653], 00:17:27.664 | 99.99th=[62653] 00:17:27.664 write: IOPS=3576, BW=14.0MiB/s (14.6MB/s)(14.1MiB/1008msec); 0 zone resets 00:17:27.664 slat (usec): min=2, max=21964, avg=137.74, stdev=856.40 00:17:27.664 clat (usec): min=2190, max=65921, avg=18460.93, stdev=9894.04 00:17:27.664 lat (usec): min=5927, max=68704, avg=18598.67, stdev=9948.11 00:17:27.664 clat percentiles (usec): 00:17:27.664 | 1.00th=[ 8225], 5.00th=[ 9896], 10.00th=[10552], 
20.00th=[11731], 00:17:27.664 | 30.00th=[13042], 40.00th=[14615], 50.00th=[15664], 60.00th=[17433], 00:17:27.664 | 70.00th=[19530], 80.00th=[21890], 90.00th=[27657], 95.00th=[35390], 00:17:27.664 | 99.00th=[62653], 99.50th=[64750], 99.90th=[65799], 99.95th=[65799], 00:17:27.664 | 99.99th=[65799] 00:17:27.664 bw ( KiB/s): min=14216, max=14456, per=22.06%, avg=14336.00, stdev=169.71, samples=2 00:17:27.664 iops : min= 3554, max= 3614, avg=3584.00, stdev=42.43, samples=2 00:17:27.664 lat (msec) : 4=0.01%, 10=13.90%, 20=57.81%, 50=25.75%, 100=2.53% 00:17:27.664 cpu : usr=2.38%, sys=3.38%, ctx=428, majf=0, minf=1 00:17:27.664 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:17:27.664 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.664 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:27.664 issued rwts: total=3584,3605,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.664 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.664 job2: (groupid=0, jobs=1): err= 0: pid=886700: Thu Jul 25 01:17:49 2024 00:17:27.664 read: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec) 00:17:27.664 slat (nsec): min=1019, max=17247k, avg=110920.21, stdev=814809.49 00:17:27.664 clat (usec): min=3072, max=43215, avg=16752.06, stdev=7131.36 00:17:27.664 lat (usec): min=3076, max=43240, avg=16862.98, stdev=7183.19 00:17:27.664 clat percentiles (usec): 00:17:27.664 | 1.00th=[ 5145], 5.00th=[ 9241], 10.00th=[10028], 20.00th=[11076], 00:17:27.664 | 30.00th=[11994], 40.00th=[12911], 50.00th=[14091], 60.00th=[16712], 00:17:27.664 | 70.00th=[19006], 80.00th=[22938], 90.00th=[27132], 95.00th=[31851], 00:17:27.664 | 99.00th=[34341], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:17:27.664 | 99.99th=[43254] 00:17:27.664 write: IOPS=4245, BW=16.6MiB/s (17.4MB/s)(16.7MiB/1006msec); 0 zone resets 00:17:27.664 slat (nsec): min=1802, max=11675k, avg=100080.46, stdev=608219.72 00:17:27.664 clat (usec): 
min=1162, max=38801, avg=13851.00, stdev=5736.51 00:17:27.664 lat (usec): min=1174, max=38805, avg=13951.08, stdev=5757.01 00:17:27.664 clat percentiles (usec): 00:17:27.664 | 1.00th=[ 3294], 5.00th=[ 6718], 10.00th=[ 8160], 20.00th=[ 9372], 00:17:27.664 | 30.00th=[10159], 40.00th=[11207], 50.00th=[12518], 60.00th=[14222], 00:17:27.664 | 70.00th=[16057], 80.00th=[18220], 90.00th=[21627], 95.00th=[25560], 00:17:27.664 | 99.00th=[31065], 99.50th=[36963], 99.90th=[38536], 99.95th=[39060], 00:17:27.664 | 99.99th=[39060] 00:17:27.664 bw ( KiB/s): min=12672, max=20480, per=25.51%, avg=16576.00, stdev=5521.09, samples=2 00:17:27.664 iops : min= 3168, max= 5120, avg=4144.00, stdev=1380.27, samples=2 00:17:27.664 lat (msec) : 2=0.06%, 4=1.05%, 10=17.34%, 20=61.42%, 50=20.13% 00:17:27.664 cpu : usr=1.89%, sys=3.18%, ctx=494, majf=0, minf=1 00:17:27.664 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:17:27.664 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.664 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:27.664 issued rwts: total=4096,4271,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.664 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.664 job3: (groupid=0, jobs=1): err= 0: pid=886702: Thu Jul 25 01:17:49 2024 00:17:27.665 read: IOPS=4027, BW=15.7MiB/s (16.5MB/s)(16.0MiB/1017msec) 00:17:27.665 slat (nsec): min=1087, max=21807k, avg=102290.39, stdev=769416.14 00:17:27.665 clat (usec): min=5344, max=37187, avg=14254.53, stdev=5430.35 00:17:27.665 lat (usec): min=5357, max=37194, avg=14356.82, stdev=5468.06 00:17:27.665 clat percentiles (usec): 00:17:27.665 | 1.00th=[ 7046], 5.00th=[ 8029], 10.00th=[ 9110], 20.00th=[10159], 00:17:27.665 | 30.00th=[11207], 40.00th=[11731], 50.00th=[12518], 60.00th=[13960], 00:17:27.665 | 70.00th=[15664], 80.00th=[18220], 90.00th=[21365], 95.00th=[25297], 00:17:27.665 | 99.00th=[31065], 99.50th=[34341], 99.90th=[36439], 
99.95th=[36439], 00:17:27.665 | 99.99th=[36963] 00:17:27.665 write: IOPS=4299, BW=16.8MiB/s (17.6MB/s)(17.1MiB/1017msec); 0 zone resets 00:17:27.665 slat (nsec): min=1939, max=14896k, avg=109257.77, stdev=666422.78 00:17:27.665 clat (usec): min=1105, max=38775, avg=16128.59, stdev=7453.28 00:17:27.665 lat (usec): min=3816, max=38779, avg=16237.85, stdev=7478.25 00:17:27.665 clat percentiles (usec): 00:17:27.665 | 1.00th=[ 5538], 5.00th=[ 6783], 10.00th=[ 8356], 20.00th=[ 9503], 00:17:27.665 | 30.00th=[10814], 40.00th=[11994], 50.00th=[13960], 60.00th=[16450], 00:17:27.665 | 70.00th=[19530], 80.00th=[23725], 90.00th=[27132], 95.00th=[30016], 00:17:27.665 | 99.00th=[36963], 99.50th=[37487], 99.90th=[38536], 99.95th=[38536], 00:17:27.665 | 99.99th=[38536] 00:17:27.665 bw ( KiB/s): min=16840, max=17128, per=26.13%, avg=16984.00, stdev=203.65, samples=2 00:17:27.665 iops : min= 4210, max= 4282, avg=4246.00, stdev=50.91, samples=2 00:17:27.665 lat (msec) : 2=0.02%, 4=0.14%, 10=21.50%, 20=56.61%, 50=21.73% 00:17:27.665 cpu : usr=2.36%, sys=3.64%, ctx=565, majf=0, minf=1 00:17:27.665 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:17:27.665 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.665 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:27.665 issued rwts: total=4096,4373,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.665 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.665 00:17:27.665 Run status group 0 (all jobs): 00:17:27.665 READ: bw=61.0MiB/s (63.9MB/s), 13.9MiB/s-15.9MiB/s (14.6MB/s-16.7MB/s), io=62.0MiB (65.0MB), run=1006-1017msec 00:17:27.665 WRITE: bw=63.5MiB/s (66.6MB/s), 14.0MiB/s-16.8MiB/s (14.6MB/s-17.6MB/s), io=64.5MiB (67.7MB), run=1006-1017msec 00:17:27.665 00:17:27.665 Disk stats (read/write): 00:17:27.665 nvme0n1: ios=3122/3439, merge=0/0, ticks=33677/28030, in_queue=61707, util=88.38% 00:17:27.665 nvme0n2: ios=3026/3072, merge=0/0, 
ticks=25053/27959, in_queue=53012, util=99.39% 00:17:27.665 nvme0n3: ios=3584/3954, merge=0/0, ticks=41528/43484, in_queue=85012, util=88.96% 00:17:27.665 nvme0n4: ios=3397/3584, merge=0/0, ticks=45922/53130, in_queue=99052, util=98.95% 00:17:27.665 01:17:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:17:27.665 [global] 00:17:27.665 thread=1 00:17:27.665 invalidate=1 00:17:27.665 rw=randwrite 00:17:27.665 time_based=1 00:17:27.665 runtime=1 00:17:27.665 ioengine=libaio 00:17:27.665 direct=1 00:17:27.665 bs=4096 00:17:27.665 iodepth=128 00:17:27.665 norandommap=0 00:17:27.665 numjobs=1 00:17:27.665 00:17:27.665 verify_dump=1 00:17:27.665 verify_backlog=512 00:17:27.665 verify_state_save=0 00:17:27.665 do_verify=1 00:17:27.665 verify=crc32c-intel 00:17:27.665 [job0] 00:17:27.665 filename=/dev/nvme0n1 00:17:27.665 [job1] 00:17:27.665 filename=/dev/nvme0n2 00:17:27.665 [job2] 00:17:27.665 filename=/dev/nvme0n3 00:17:27.665 [job3] 00:17:27.665 filename=/dev/nvme0n4 00:17:27.665 Could not set queue depth (nvme0n1) 00:17:27.665 Could not set queue depth (nvme0n2) 00:17:27.665 Could not set queue depth (nvme0n3) 00:17:27.665 Could not set queue depth (nvme0n4) 00:17:27.925 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:27.925 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:27.925 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:27.925 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:27.925 fio-3.35 00:17:27.925 Starting 4 threads 00:17:29.328 00:17:29.328 job0: (groupid=0, jobs=1): err= 0: pid=887077: Thu Jul 25 01:17:51 2024 00:17:29.328 read: IOPS=4071, BW=15.9MiB/s 
(16.7MB/s)(16.0MiB/1006msec) 00:17:29.328 slat (nsec): min=1597, max=25479k, avg=114976.92, stdev=844017.80 00:17:29.328 clat (usec): min=6994, max=61588, avg=15753.69, stdev=8238.23 00:17:29.328 lat (usec): min=7001, max=61612, avg=15868.67, stdev=8306.07 00:17:29.328 clat percentiles (usec): 00:17:29.328 | 1.00th=[ 6980], 5.00th=[ 8455], 10.00th=[ 9372], 20.00th=[10421], 00:17:29.328 | 30.00th=[11469], 40.00th=[12256], 50.00th=[13173], 60.00th=[14091], 00:17:29.328 | 70.00th=[15795], 80.00th=[18744], 90.00th=[24773], 95.00th=[34866], 00:17:29.328 | 99.00th=[49021], 99.50th=[49021], 99.90th=[50594], 99.95th=[54789], 00:17:29.328 | 99.99th=[61604] 00:17:29.328 write: IOPS=4528, BW=17.7MiB/s (18.6MB/s)(17.8MiB/1006msec); 0 zone resets 00:17:29.328 slat (usec): min=2, max=11499, avg=110.82, stdev=652.60 00:17:29.328 clat (usec): min=1183, max=39430, avg=13843.28, stdev=5982.54 00:17:29.328 lat (usec): min=4187, max=39434, avg=13954.10, stdev=6011.73 00:17:29.328 clat percentiles (usec): 00:17:29.328 | 1.00th=[ 5145], 5.00th=[ 6980], 10.00th=[ 8094], 20.00th=[ 8979], 00:17:29.328 | 30.00th=[ 9765], 40.00th=[11076], 50.00th=[12387], 60.00th=[13435], 00:17:29.328 | 70.00th=[16188], 80.00th=[18482], 90.00th=[22414], 95.00th=[25035], 00:17:29.328 | 99.00th=[33424], 99.50th=[34341], 99.90th=[34866], 99.95th=[34866], 00:17:29.328 | 99.99th=[39584] 00:17:29.328 bw ( KiB/s): min=15584, max=19840, per=28.21%, avg=17712.00, stdev=3009.45, samples=2 00:17:29.328 iops : min= 3896, max= 4960, avg=4428.00, stdev=752.36, samples=2 00:17:29.328 lat (msec) : 2=0.01%, 4=0.01%, 10=23.90%, 20=59.99%, 50=15.93% 00:17:29.328 lat (msec) : 100=0.16% 00:17:29.328 cpu : usr=3.98%, sys=5.17%, ctx=361, majf=0, minf=1 00:17:29.328 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:17:29.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:29.328 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:29.328 issued rwts: 
total=4096,4556,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:29.328 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:29.328 job1: (groupid=0, jobs=1): err= 0: pid=887078: Thu Jul 25 01:17:51 2024 00:17:29.328 read: IOPS=3296, BW=12.9MiB/s (13.5MB/s)(13.0MiB/1006msec) 00:17:29.328 slat (nsec): min=1054, max=28868k, avg=163928.39, stdev=1420583.23 00:17:29.328 clat (usec): min=1899, max=69521, avg=21955.19, stdev=12973.33 00:17:29.328 lat (usec): min=1902, max=69540, avg=22119.12, stdev=13107.96 00:17:29.328 clat percentiles (usec): 00:17:29.328 | 1.00th=[ 4752], 5.00th=[ 7242], 10.00th=[ 7898], 20.00th=[ 9765], 00:17:29.328 | 30.00th=[12649], 40.00th=[15401], 50.00th=[18482], 60.00th=[24249], 00:17:29.328 | 70.00th=[27395], 80.00th=[33424], 90.00th=[41681], 95.00th=[47973], 00:17:29.328 | 99.00th=[52691], 99.50th=[61080], 99.90th=[65799], 99.95th=[68682], 00:17:29.328 | 99.99th=[69731] 00:17:29.328 write: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec); 0 zone resets 00:17:29.328 slat (nsec): min=1862, max=13416k, avg=106085.68, stdev=683344.20 00:17:29.328 clat (usec): min=1504, max=60980, avg=15304.57, stdev=7576.94 00:17:29.328 lat (usec): min=1728, max=60989, avg=15410.65, stdev=7614.75 00:17:29.328 clat percentiles (usec): 00:17:29.328 | 1.00th=[ 3261], 5.00th=[ 5735], 10.00th=[ 7111], 20.00th=[ 9110], 00:17:29.328 | 30.00th=[10028], 40.00th=[11207], 50.00th=[13566], 60.00th=[15795], 00:17:29.328 | 70.00th=[18482], 80.00th=[22414], 90.00th=[26870], 95.00th=[30278], 00:17:29.328 | 99.00th=[32375], 99.50th=[33817], 99.90th=[35390], 99.95th=[35914], 00:17:29.328 | 99.99th=[61080] 00:17:29.328 bw ( KiB/s): min=12288, max=16384, per=22.83%, avg=14336.00, stdev=2896.31, samples=2 00:17:29.328 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:17:29.328 lat (msec) : 2=0.33%, 4=0.65%, 10=24.88%, 20=38.90%, 50=34.33% 00:17:29.328 lat (msec) : 100=0.90% 00:17:29.328 cpu : usr=2.39%, sys=2.09%, ctx=475, majf=0, minf=1 00:17:29.328 IO 
depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:17:29.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:29.328 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:29.328 issued rwts: total=3316,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:29.328 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:29.328 job2: (groupid=0, jobs=1): err= 0: pid=887079: Thu Jul 25 01:17:51 2024 00:17:29.328 read: IOPS=2023, BW=8095KiB/s (8289kB/s)(8192KiB/1012msec) 00:17:29.328 slat (nsec): min=1134, max=26972k, avg=234667.62, stdev=1791686.05 00:17:29.328 clat (usec): min=4710, max=69497, avg=30214.72, stdev=13374.08 00:17:29.328 lat (usec): min=4717, max=69514, avg=30449.38, stdev=13551.94 00:17:29.328 clat percentiles (usec): 00:17:29.328 | 1.00th=[ 4883], 5.00th=[14484], 10.00th=[15664], 20.00th=[16450], 00:17:29.328 | 30.00th=[18220], 40.00th=[21890], 50.00th=[28705], 60.00th=[38011], 00:17:29.328 | 70.00th=[40109], 80.00th=[43779], 90.00th=[46924], 95.00th=[51643], 00:17:29.328 | 99.00th=[55313], 99.50th=[56361], 99.90th=[66847], 99.95th=[66847], 00:17:29.328 | 99.99th=[69731] 00:17:29.328 write: IOPS=2088, BW=8356KiB/s (8556kB/s)(8456KiB/1012msec); 0 zone resets 00:17:29.328 slat (usec): min=2, max=18952, avg=225.79, stdev=1152.48 00:17:29.328 clat (usec): min=6728, max=76580, avg=29753.09, stdev=15144.39 00:17:29.328 lat (usec): min=7294, max=76588, avg=29978.88, stdev=15243.89 00:17:29.328 clat percentiles (usec): 00:17:29.328 | 1.00th=[ 7439], 5.00th=[13042], 10.00th=[13829], 20.00th=[17171], 00:17:29.328 | 30.00th=[19530], 40.00th=[22676], 50.00th=[24511], 60.00th=[28705], 00:17:29.328 | 70.00th=[33817], 80.00th=[40633], 90.00th=[53740], 95.00th=[63701], 00:17:29.328 | 99.00th=[67634], 99.50th=[73925], 99.90th=[77071], 99.95th=[77071], 00:17:29.328 | 99.99th=[77071] 00:17:29.328 bw ( KiB/s): min= 8192, max= 8192, per=13.05%, avg=8192.00, stdev= 0.00, samples=2 00:17:29.328 iops 
: min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:17:29.328 lat (msec) : 10=1.73%, 20=31.31%, 50=57.38%, 100=9.59% 00:17:29.328 cpu : usr=0.99%, sys=1.88%, ctx=312, majf=0, minf=1 00:17:29.328 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:17:29.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:29.328 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:29.328 issued rwts: total=2048,2114,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:29.328 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:29.328 job3: (groupid=0, jobs=1): err= 0: pid=887080: Thu Jul 25 01:17:51 2024 00:17:29.328 read: IOPS=5106, BW=19.9MiB/s (20.9MB/s)(20.1MiB/1006msec) 00:17:29.328 slat (nsec): min=1046, max=9853.9k, avg=78449.02, stdev=527897.51 00:17:29.328 clat (usec): min=1690, max=21258, avg=10603.76, stdev=2567.82 00:17:29.328 lat (usec): min=1698, max=21637, avg=10682.21, stdev=2597.08 00:17:29.328 clat percentiles (usec): 00:17:29.328 | 1.00th=[ 3982], 5.00th=[ 5866], 10.00th=[ 7373], 20.00th=[ 8979], 00:17:29.328 | 30.00th=[ 9765], 40.00th=[10159], 50.00th=[10683], 60.00th=[11076], 00:17:29.328 | 70.00th=[11731], 80.00th=[12125], 90.00th=[13829], 95.00th=[14746], 00:17:29.328 | 99.00th=[17433], 99.50th=[17695], 99.90th=[20317], 99.95th=[21365], 00:17:29.328 | 99.99th=[21365] 00:17:29.328 write: IOPS=5598, BW=21.9MiB/s (22.9MB/s)(22.0MiB/1006msec); 0 zone resets 00:17:29.328 slat (nsec): min=1787, max=7287.9k, avg=93173.18, stdev=454540.96 00:17:29.328 clat (usec): min=967, max=31753, avg=12990.33, stdev=5418.38 00:17:29.328 lat (usec): min=978, max=31761, avg=13083.50, stdev=5443.02 00:17:29.328 clat percentiles (usec): 00:17:29.328 | 1.00th=[ 3589], 5.00th=[ 5211], 10.00th=[ 6980], 20.00th=[ 8586], 00:17:29.328 | 30.00th=[ 9765], 40.00th=[10945], 50.00th=[11994], 60.00th=[13304], 00:17:29.328 | 70.00th=[14877], 80.00th=[16909], 90.00th=[20579], 95.00th=[23987], 00:17:29.328 | 
99.00th=[27657], 99.50th=[28967], 99.90th=[30802], 99.95th=[30802], 00:17:29.328 | 99.99th=[31851] 00:17:29.328 bw ( KiB/s): min=21008, max=23168, per=35.18%, avg=22088.00, stdev=1527.35, samples=2 00:17:29.328 iops : min= 5252, max= 5792, avg=5522.00, stdev=381.84, samples=2 00:17:29.328 lat (usec) : 1000=0.01% 00:17:29.328 lat (msec) : 2=0.33%, 4=0.92%, 10=33.61%, 20=59.28%, 50=5.85% 00:17:29.328 cpu : usr=2.39%, sys=4.28%, ctx=779, majf=0, minf=1 00:17:29.328 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:17:29.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:29.328 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:29.328 issued rwts: total=5137,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:29.329 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:29.329 00:17:29.329 Run status group 0 (all jobs): 00:17:29.329 READ: bw=56.3MiB/s (59.1MB/s), 8095KiB/s-19.9MiB/s (8289kB/s-20.9MB/s), io=57.0MiB (59.8MB), run=1006-1012msec 00:17:29.329 WRITE: bw=61.3MiB/s (64.3MB/s), 8356KiB/s-21.9MiB/s (8556kB/s-22.9MB/s), io=62.1MiB (65.1MB), run=1006-1012msec 00:17:29.329 00:17:29.329 Disk stats (read/write): 00:17:29.329 nvme0n1: ios=3921/4096, merge=0/0, ticks=51482/48302, in_queue=99784, util=99.70% 00:17:29.329 nvme0n2: ios=2583/2921, merge=0/0, ticks=36237/26714, in_queue=62951, util=98.78% 00:17:29.329 nvme0n3: ios=1552/1543, merge=0/0, ticks=26855/26039, in_queue=52894, util=96.88% 00:17:29.329 nvme0n4: ios=4650/4772, merge=0/0, ticks=35328/39059, in_queue=74387, util=99.27% 00:17:29.329 01:17:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:17:29.329 01:17:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=887226 00:17:29.329 01:17:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:17:29.329 01:17:51 nvmf_tcp.nvmf_fio_target -- 
target/fio.sh@61 -- # sleep 3 00:17:29.329 [global] 00:17:29.329 thread=1 00:17:29.329 invalidate=1 00:17:29.329 rw=read 00:17:29.329 time_based=1 00:17:29.329 runtime=10 00:17:29.329 ioengine=libaio 00:17:29.329 direct=1 00:17:29.329 bs=4096 00:17:29.329 iodepth=1 00:17:29.329 norandommap=1 00:17:29.329 numjobs=1 00:17:29.329 00:17:29.329 [job0] 00:17:29.329 filename=/dev/nvme0n1 00:17:29.329 [job1] 00:17:29.329 filename=/dev/nvme0n2 00:17:29.329 [job2] 00:17:29.329 filename=/dev/nvme0n3 00:17:29.329 [job3] 00:17:29.329 filename=/dev/nvme0n4 00:17:29.329 Could not set queue depth (nvme0n1) 00:17:29.329 Could not set queue depth (nvme0n2) 00:17:29.329 Could not set queue depth (nvme0n3) 00:17:29.329 Could not set queue depth (nvme0n4) 00:17:29.587 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:29.587 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:29.587 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:29.587 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:29.587 fio-3.35 00:17:29.587 Starting 4 threads 00:17:32.171 01:17:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:17:32.430 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=7450624, buflen=4096 00:17:32.430 fio: pid=887449, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:32.430 01:17:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:17:32.430 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=1273856, buflen=4096 00:17:32.430 fio: pid=887448, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:32.430 
01:17:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:32.430 01:17:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:17:32.689 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=17207296, buflen=4096 00:17:32.689 fio: pid=887446, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:32.689 01:17:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:32.689 01:17:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:17:32.948 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=21884928, buflen=4096 00:17:32.948 fio: pid=887447, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:32.948 01:17:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:32.948 01:17:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:17:32.948 00:17:32.948 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=887446: Thu Jul 25 01:17:55 2024 00:17:32.948 read: IOPS=1368, BW=5472KiB/s (5603kB/s)(16.4MiB/3071msec) 00:17:32.948 slat (usec): min=5, max=15254, avg=21.71, stdev=432.59 00:17:32.948 clat (usec): min=349, max=43003, avg=707.52, stdev=2864.57 00:17:32.948 lat (usec): min=356, max=43026, avg=729.24, stdev=2898.65 00:17:32.948 clat percentiles (usec): 00:17:32.948 | 1.00th=[ 396], 5.00th=[ 433], 10.00th=[ 441], 20.00th=[ 453], 00:17:32.948 | 30.00th=[ 465], 40.00th=[ 478], 50.00th=[ 490], 60.00th=[ 498], 00:17:32.948 | 70.00th=[ 510], 80.00th=[ 537], 
90.00th=[ 627], 95.00th=[ 725], 00:17:32.948 | 99.00th=[ 848], 99.50th=[ 1254], 99.90th=[42206], 99.95th=[42730], 00:17:32.948 | 99.99th=[43254] 00:17:32.949 bw ( KiB/s): min= 88, max= 8040, per=35.21%, avg=5041.60, stdev=3582.45, samples=5 00:17:32.949 iops : min= 22, max= 2010, avg=1260.40, stdev=895.61, samples=5 00:17:32.949 lat (usec) : 500=61.09%, 750=34.91%, 1000=3.36% 00:17:32.949 lat (msec) : 2=0.14%, 50=0.48% 00:17:32.949 cpu : usr=0.46%, sys=1.40%, ctx=4210, majf=0, minf=1 00:17:32.949 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:32.949 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:32.949 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:32.949 issued rwts: total=4202,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:32.949 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:32.949 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=887447: Thu Jul 25 01:17:55 2024 00:17:32.949 read: IOPS=1638, BW=6552KiB/s (6709kB/s)(20.9MiB/3262msec) 00:17:32.949 slat (usec): min=3, max=21892, avg=21.41, stdev=506.36 00:17:32.949 clat (usec): min=454, max=4046, avg=587.05, stdev=126.09 00:17:32.949 lat (usec): min=462, max=22702, avg=608.47, stdev=532.95 00:17:32.949 clat percentiles (usec): 00:17:32.949 | 1.00th=[ 478], 5.00th=[ 490], 10.00th=[ 494], 20.00th=[ 506], 00:17:32.949 | 30.00th=[ 519], 40.00th=[ 529], 50.00th=[ 545], 60.00th=[ 562], 00:17:32.949 | 70.00th=[ 603], 80.00th=[ 652], 90.00th=[ 725], 95.00th=[ 816], 00:17:32.949 | 99.00th=[ 1045], 99.50th=[ 1074], 99.90th=[ 1205], 99.95th=[ 1270], 00:17:32.949 | 99.99th=[ 4047] 00:17:32.949 bw ( KiB/s): min= 4974, max= 7312, per=45.78%, avg=6554.33, stdev=824.41, samples=6 00:17:32.949 iops : min= 1243, max= 1828, avg=1638.50, stdev=206.29, samples=6 00:17:32.949 lat (usec) : 500=15.46%, 750=77.23%, 1000=5.45% 00:17:32.949 lat (msec) : 2=1.83%, 10=0.02% 00:17:32.949 
cpu : usr=1.01%, sys=2.51%, ctx=5350, majf=0, minf=1 00:17:32.949 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:32.949 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:32.949 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:32.949 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:32.949 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:32.949 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=887448: Thu Jul 25 01:17:55 2024 00:17:32.949 read: IOPS=107, BW=429KiB/s (439kB/s)(1244KiB/2901msec) 00:17:32.949 slat (nsec): min=7263, max=31208, avg=11421.00, stdev=5643.54 00:17:32.949 clat (usec): min=463, max=44050, avg=9309.39, stdev=16839.94 00:17:32.949 lat (usec): min=471, max=44076, avg=9320.78, stdev=16844.74 00:17:32.949 clat percentiles (usec): 00:17:32.949 | 1.00th=[ 469], 5.00th=[ 494], 10.00th=[ 510], 20.00th=[ 578], 00:17:32.949 | 30.00th=[ 603], 40.00th=[ 627], 50.00th=[ 660], 60.00th=[ 709], 00:17:32.949 | 70.00th=[ 898], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:17:32.949 | 99.00th=[42206], 99.50th=[43779], 99.90th=[44303], 99.95th=[44303], 00:17:32.949 | 99.99th=[44303] 00:17:32.949 bw ( KiB/s): min= 96, max= 1232, per=3.36%, avg=481.60, stdev=541.83, samples=5 00:17:32.949 iops : min= 24, max= 308, avg=120.40, stdev=135.46, samples=5 00:17:32.949 lat (usec) : 500=8.33%, 750=54.49%, 1000=10.26% 00:17:32.949 lat (msec) : 2=5.77%, 50=20.83% 00:17:32.949 cpu : usr=0.00%, sys=0.31%, ctx=312, majf=0, minf=1 00:17:32.949 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:32.949 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:32.949 complete : 0=0.3%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:32.949 issued rwts: total=312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:32.949 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:17:32.949 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=887449: Thu Jul 25 01:17:55 2024 00:17:32.949 read: IOPS=676, BW=2703KiB/s (2768kB/s)(7276KiB/2692msec) 00:17:32.949 slat (nsec): min=5543, max=34581, avg=8379.36, stdev=2025.22 00:17:32.949 clat (usec): min=359, max=43024, avg=1468.85, stdev=5998.33 00:17:32.949 lat (usec): min=365, max=43049, avg=1477.22, stdev=5999.13 00:17:32.949 clat percentiles (usec): 00:17:32.949 | 1.00th=[ 383], 5.00th=[ 441], 10.00th=[ 498], 20.00th=[ 537], 00:17:32.949 | 30.00th=[ 553], 40.00th=[ 562], 50.00th=[ 570], 60.00th=[ 578], 00:17:32.949 | 70.00th=[ 586], 80.00th=[ 603], 90.00th=[ 709], 95.00th=[ 824], 00:17:32.949 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[43254], 00:17:32.949 | 99.99th=[43254] 00:17:32.949 bw ( KiB/s): min= 96, max= 6816, per=20.27%, avg=2902.40, stdev=3570.19, samples=5 00:17:32.949 iops : min= 24, max= 1704, avg=725.60, stdev=892.55, samples=5 00:17:32.949 lat (usec) : 500=10.88%, 750=81.04%, 1000=5.00% 00:17:32.949 lat (msec) : 2=0.88%, 50=2.14% 00:17:32.949 cpu : usr=0.45%, sys=1.00%, ctx=1820, majf=0, minf=2 00:17:32.949 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:32.949 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:32.949 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:32.949 issued rwts: total=1820,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:32.949 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:32.949 00:17:32.949 Run status group 0 (all jobs): 00:17:32.949 READ: bw=14.0MiB/s (14.7MB/s), 429KiB/s-6552KiB/s (439kB/s-6709kB/s), io=45.6MiB (47.8MB), run=2692-3262msec 00:17:32.949 00:17:32.949 Disk stats (read/write): 00:17:32.949 nvme0n1: ios=3933/0, merge=0/0, ticks=3329/0, in_queue=3329, util=99.77% 00:17:32.949 nvme0n2: ios=5039/0, merge=0/0, ticks=2926/0, in_queue=2926, util=93.90% 
00:17:32.949 nvme0n3: ios=309/0, merge=0/0, ticks=2811/0, in_queue=2811, util=96.52% 00:17:32.949 nvme0n4: ios=1816/0, merge=0/0, ticks=2530/0, in_queue=2530, util=96.48% 00:17:33.208 01:17:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:33.208 01:17:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:17:33.208 01:17:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:33.208 01:17:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:17:33.467 01:17:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:33.467 01:17:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:17:33.726 01:17:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:33.726 01:17:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:17:33.985 01:17:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:17:33.985 01:17:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 887226 00:17:33.985 01:17:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:17:33.985 01:17:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:33.985 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:33.985 01:17:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:33.985 01:17:56 
nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:17:33.985 01:17:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:33.985 01:17:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:33.985 01:17:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:33.985 01:17:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:33.985 01:17:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:17:33.985 01:17:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:17:33.985 01:17:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:17:33.985 nvmf hotplug test: fio failed as expected 00:17:33.985 01:17:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:34.244 01:17:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:17:34.244 01:17:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:17:34.244 01:17:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:17:34.244 01:17:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:17:34.244 01:17:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:17:34.244 01:17:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:34.244 01:17:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:17:34.244 01:17:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:34.244 01:17:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:17:34.244 01:17:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in 
{1..20} 00:17:34.244 01:17:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:34.244 rmmod nvme_tcp 00:17:34.244 rmmod nvme_fabrics 00:17:34.244 rmmod nvme_keyring 00:17:34.244 01:17:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:34.244 01:17:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:17:34.244 01:17:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:17:34.244 01:17:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 884387 ']' 00:17:34.244 01:17:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 884387 00:17:34.244 01:17:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 884387 ']' 00:17:34.244 01:17:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 884387 00:17:34.244 01:17:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:17:34.244 01:17:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:34.244 01:17:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 884387 00:17:34.244 01:17:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:34.244 01:17:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:34.244 01:17:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 884387' 00:17:34.244 killing process with pid 884387 00:17:34.244 01:17:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 884387 00:17:34.244 01:17:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 884387 00:17:34.503 01:17:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:34.503 01:17:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:34.503 01:17:56 nvmf_tcp.nvmf_fio_target -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:34.503 01:17:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:34.503 01:17:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:34.503 01:17:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:34.503 01:17:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:34.503 01:17:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:37.042 01:17:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:37.042 00:17:37.042 real 0m25.913s 00:17:37.042 user 1m45.434s 00:17:37.042 sys 0m7.276s 00:17:37.042 01:17:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:37.042 01:17:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.042 ************************************ 00:17:37.042 END TEST nvmf_fio_target 00:17:37.042 ************************************ 00:17:37.042 01:17:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:37.042 01:17:58 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:37.042 01:17:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:37.042 01:17:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:37.042 01:17:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:37.042 ************************************ 00:17:37.042 START TEST nvmf_bdevio 00:17:37.042 ************************************ 00:17:37.042 01:17:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:37.042 * Looking for test storage... 
00:17:37.042 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:37.042 01:17:59 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:37.042 01:17:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:17:37.042 01:17:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:37.042 01:17:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:37.042 01:17:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:37.042 01:17:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:37.042 01:17:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:37.042 01:17:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:37.042 01:17:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:37.042 01:17:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:37.042 01:17:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:37.042 01:17:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:37.042 01:17:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:37.042 01:17:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:37.042 01:17:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:37.042 01:17:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:37.042 01:17:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:37.042 01:17:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:37.042 01:17:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 
-- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:37.042 01:17:59 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:37.042 01:17:59 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:37.042 01:17:59 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:37.042 01:17:59 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.042 01:17:59 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.042 01:17:59 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.042 01:17:59 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:17:37.042 01:17:59 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.042 01:17:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:17:37.042 01:17:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:37.042 01:17:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:37.042 01:17:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:37.042 01:17:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:37.042 01:17:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:37.042 01:17:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:37.042 01:17:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:37.042 01:17:59 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:17:37.042 01:17:59 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:37.042 01:17:59 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:37.042 01:17:59 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:17:37.042 01:17:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:37.042 01:17:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:37.042 01:17:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:37.042 01:17:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:37.042 01:17:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:37.042 01:17:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:37.042 01:17:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:37.042 01:17:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:37.042 01:17:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:37.042 01:17:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:37.042 01:17:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:17:37.042 01:17:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:41.234 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:41.234 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:17:41.234 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:41.234 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:41.234 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:41.234 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 
00:17:41.234 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:41.234 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:17:41.234 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:41.234 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:17:41.234 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:17:41.234 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:17:41.234 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:17:41.234 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:17:41.234 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:17:41.234 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:41.234 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:41.234 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:41.234 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:41.234 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:41.234 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:41.234 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:41.234 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:41.234 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:41.234 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:41.234 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 
00:17:41.234 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:41.234 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:41.234 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:41.234 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:41.234 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:41.234 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:41.234 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:41.234 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:41.234 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:41.234 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:41.234 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:41.234 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:41.234 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:41.234 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:41.234 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:41.234 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:41.234 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:41.234 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:41.234 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:41.234 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:41.493 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:41.493 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
00:17:41.493 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:41.493 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:41.493 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:41.493 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:41.493 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:41.493 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:41.493 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:41.493 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:41.493 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:41.493 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:41.493 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:41.493 Found net devices under 0000:86:00.0: cvl_0_0 00:17:41.493 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:41.493 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:41.493 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:41.493 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:41.493 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:41.493 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:41.493 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:41.493 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:41.493 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 
'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:41.493 Found net devices under 0000:86:00.1: cvl_0_1 00:17:41.493 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:41.493 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:41.493 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:17:41.493 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:41.493 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:41.493 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:41.493 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:41.493 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:41.493 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:41.493 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:41.493 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:41.493 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:41.493 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:41.493 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:41.493 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:41.493 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:41.493 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:41.493 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:41.493 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:41.493 01:18:03 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:41.494 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:41.494 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:41.494 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:41.494 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:41.494 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:41.494 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:41.494 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:41.494 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:17:41.494 00:17:41.494 --- 10.0.0.2 ping statistics --- 00:17:41.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:41.494 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:17:41.494 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:41.494 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:41.494 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.388 ms 00:17:41.494 00:17:41.494 --- 10.0.0.1 ping statistics --- 00:17:41.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:41.494 rtt min/avg/max/mdev = 0.388/0.388/0.388/0.000 ms 00:17:41.494 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:41.494 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:17:41.494 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:41.494 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:41.494 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:41.494 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:41.494 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:41.494 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:41.494 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:41.494 01:18:03 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:41.494 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:41.494 01:18:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:41.494 01:18:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:41.494 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=891650 00:17:41.494 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 891650 00:17:41.494 01:18:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 891650 ']' 00:17:41.494 01:18:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:41.494 01:18:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:41.494 
01:18:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:41.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:41.494 01:18:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:17:41.494 01:18:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:41.494 01:18:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:41.753 [2024-07-25 01:18:04.027007] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:17:41.753 [2024-07-25 01:18:04.027056] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:41.753 EAL: No free 2048 kB hugepages reported on node 1 00:17:41.753 [2024-07-25 01:18:04.083797] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:41.753 [2024-07-25 01:18:04.163831] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:41.753 [2024-07-25 01:18:04.163867] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:41.753 [2024-07-25 01:18:04.163874] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:41.753 [2024-07-25 01:18:04.163881] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:41.753 [2024-07-25 01:18:04.163886] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:41.753 [2024-07-25 01:18:04.164000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:17:41.753 [2024-07-25 01:18:04.164108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:17:41.753 [2024-07-25 01:18:04.164214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:41.753 [2024-07-25 01:18:04.164215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:17:42.688 01:18:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:42.688 01:18:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:17:42.688 01:18:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:42.688 01:18:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:42.688 01:18:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:42.688 01:18:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:42.688 01:18:04 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:42.688 01:18:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.688 01:18:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:42.688 [2024-07-25 01:18:04.867175] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:42.688 01:18:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.688 01:18:04 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:42.688 01:18:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.688 01:18:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:42.688 Malloc0 00:17:42.688 01:18:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.688 01:18:04 nvmf_tcp.nvmf_bdevio 
-- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:42.688 01:18:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.688 01:18:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:42.688 01:18:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.688 01:18:04 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:42.688 01:18:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.688 01:18:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:42.688 01:18:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.688 01:18:04 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:42.688 01:18:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.688 01:18:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:42.688 [2024-07-25 01:18:04.910745] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:42.688 01:18:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.688 01:18:04 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:17:42.688 01:18:04 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:42.688 01:18:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:17:42.688 01:18:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:17:42.688 01:18:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:42.688 01:18:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:17:42.688 { 00:17:42.688 "params": { 00:17:42.688 "name": "Nvme$subsystem", 00:17:42.688 "trtype": "$TEST_TRANSPORT", 00:17:42.688 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:42.688 "adrfam": "ipv4", 00:17:42.688 "trsvcid": "$NVMF_PORT", 00:17:42.688 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:42.688 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:42.688 "hdgst": ${hdgst:-false}, 00:17:42.688 "ddgst": ${ddgst:-false} 00:17:42.688 }, 00:17:42.688 "method": "bdev_nvme_attach_controller" 00:17:42.688 } 00:17:42.688 EOF 00:17:42.688 )") 00:17:42.688 01:18:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:17:42.688 01:18:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:17:42.688 01:18:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:17:42.688 01:18:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:42.688 "params": { 00:17:42.688 "name": "Nvme1", 00:17:42.688 "trtype": "tcp", 00:17:42.688 "traddr": "10.0.0.2", 00:17:42.688 "adrfam": "ipv4", 00:17:42.688 "trsvcid": "4420", 00:17:42.688 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:42.688 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:42.688 "hdgst": false, 00:17:42.688 "ddgst": false 00:17:42.688 }, 00:17:42.688 "method": "bdev_nvme_attach_controller" 00:17:42.688 }' 00:17:42.688 [2024-07-25 01:18:04.959143] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:17:42.688 [2024-07-25 01:18:04.959188] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid891836 ] 00:17:42.688 EAL: No free 2048 kB hugepages reported on node 1 00:17:42.688 [2024-07-25 01:18:05.013126] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:42.688 [2024-07-25 01:18:05.088635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:42.688 [2024-07-25 01:18:05.088653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:42.688 [2024-07-25 01:18:05.088655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:42.946 I/O targets: 00:17:42.946 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:42.946 00:17:42.946 00:17:42.946 CUnit - A unit testing framework for C - Version 2.1-3 00:17:42.946 http://cunit.sourceforge.net/ 00:17:42.946 00:17:42.946 00:17:42.946 Suite: bdevio tests on: Nvme1n1 00:17:42.946 Test: blockdev write read block ...passed 00:17:42.946 Test: blockdev write zeroes read block ...passed 00:17:42.946 Test: blockdev write zeroes read no split ...passed 00:17:43.205 Test: blockdev write zeroes read split ...passed 00:17:43.205 Test: blockdev write zeroes read split partial ...passed 00:17:43.205 Test: blockdev reset ...[2024-07-25 01:18:05.566865] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:43.205 [2024-07-25 01:18:05.566929] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x192d6d0 (9): Bad file descriptor 00:17:43.463 [2024-07-25 01:18:05.713721] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:43.463 passed 00:17:43.463 Test: blockdev write read 8 blocks ...passed 00:17:43.463 Test: blockdev write read size > 128k ...passed 00:17:43.463 Test: blockdev write read invalid size ...passed 00:17:43.463 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:43.463 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:43.464 Test: blockdev write read max offset ...passed 00:17:43.464 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:43.464 Test: blockdev writev readv 8 blocks ...passed 00:17:43.464 Test: blockdev writev readv 30 x 1block ...passed 00:17:43.464 Test: blockdev writev readv block ...passed 00:17:43.464 Test: blockdev writev readv size > 128k ...passed 00:17:43.464 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:43.464 Test: blockdev comparev and writev ...[2024-07-25 01:18:05.949617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:43.464 [2024-07-25 01:18:05.949645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:43.464 [2024-07-25 01:18:05.949659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:43.464 [2024-07-25 01:18:05.949666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:43.464 [2024-07-25 01:18:05.950213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:43.464 [2024-07-25 01:18:05.950225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:43.464 [2024-07-25 01:18:05.950237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:43.464 [2024-07-25 01:18:05.950244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:43.464 [2024-07-25 01:18:05.950683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:43.464 [2024-07-25 01:18:05.950694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:43.464 [2024-07-25 01:18:05.950706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:43.464 [2024-07-25 01:18:05.950713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:43.464 [2024-07-25 01:18:05.951173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:43.464 [2024-07-25 01:18:05.951185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:43.464 [2024-07-25 01:18:05.951197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:43.464 [2024-07-25 01:18:05.951204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:43.723 passed 00:17:43.723 Test: blockdev nvme passthru rw ...passed 00:17:43.723 Test: blockdev nvme passthru vendor specific ...[2024-07-25 01:18:06.035925] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:43.723 [2024-07-25 01:18:06.035940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:43.723 [2024-07-25 01:18:06.036310] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:43.723 [2024-07-25 01:18:06.036321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:43.723 [2024-07-25 01:18:06.036688] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:43.723 [2024-07-25 01:18:06.036699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:43.723 [2024-07-25 01:18:06.037071] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:43.723 [2024-07-25 01:18:06.037085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:43.723 passed 00:17:43.723 Test: blockdev nvme admin passthru ...passed 00:17:43.723 Test: blockdev copy ...passed 00:17:43.723 00:17:43.723 Run Summary: Type Total Ran Passed Failed Inactive 00:17:43.723 suites 1 1 n/a 0 0 00:17:43.723 tests 23 23 23 0 0 00:17:43.723 asserts 152 152 152 0 n/a 00:17:43.723 00:17:43.723 Elapsed time = 1.527 seconds 00:17:43.983 01:18:06 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:43.983 01:18:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.983 01:18:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:43.983 01:18:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.983 01:18:06 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:43.983 01:18:06 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 
00:17:43.983 01:18:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:43.983 01:18:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:17:43.983 01:18:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:43.983 01:18:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:17:43.983 01:18:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:43.983 01:18:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:43.983 rmmod nvme_tcp 00:17:43.983 rmmod nvme_fabrics 00:17:43.983 rmmod nvme_keyring 00:17:43.983 01:18:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:43.983 01:18:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:17:43.983 01:18:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:17:43.983 01:18:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 891650 ']' 00:17:43.983 01:18:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 891650 00:17:43.983 01:18:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 891650 ']' 00:17:43.983 01:18:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 891650 00:17:43.983 01:18:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:17:43.983 01:18:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:43.983 01:18:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 891650 00:17:43.983 01:18:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:17:43.983 01:18:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:17:43.983 01:18:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 891650' 00:17:43.983 killing process with pid 891650 00:17:43.983 01:18:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 891650 
00:17:43.983 01:18:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 891650 00:17:44.242 01:18:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:44.242 01:18:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:44.242 01:18:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:44.242 01:18:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:44.242 01:18:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:44.242 01:18:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:44.242 01:18:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:44.242 01:18:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:46.151 01:18:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:46.151 00:17:46.151 real 0m9.621s 00:17:46.151 user 0m13.469s 00:17:46.151 sys 0m4.216s 00:17:46.151 01:18:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:46.151 01:18:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:46.151 ************************************ 00:17:46.151 END TEST nvmf_bdevio 00:17:46.151 ************************************ 00:17:46.411 01:18:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:46.411 01:18:08 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:46.411 01:18:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:46.411 01:18:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:46.411 01:18:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:46.411 ************************************ 00:17:46.411 START TEST nvmf_auth_target 00:17:46.411 
************************************ 00:17:46.411 01:18:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:46.411 * Looking for test storage... 00:17:46.411 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:46.411 01:18:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:46.411 01:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:17:46.411 01:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:46.411 01:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:46.411 01:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:46.411 01:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:46.411 01:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:46.411 01:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:46.411 01:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:46.411 01:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:46.411 01:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:46.411 01:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:46.411 01:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:46.411 01:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:46.411 01:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:46.411 01:18:08 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:46.411 01:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:46.411 01:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:46.411 01:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:46.411 01:18:08 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:46.411 01:18:08 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:46.411 01:18:08 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:46.411 01:18:08 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.411 01:18:08 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.411 01:18:08 
nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.411 01:18:08 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:17:46.411 01:18:08 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.411 01:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:17:46.411 01:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:46.411 01:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:46.411 01:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:46.411 01:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:46.411 01:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:46.411 01:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:46.411 01:18:08 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:46.411 01:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:46.411 01:18:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:46.411 01:18:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:46.411 01:18:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:17:46.411 01:18:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:46.411 01:18:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:17:46.411 01:18:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:17:46.411 01:18:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:17:46.411 01:18:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:17:46.411 01:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:46.411 01:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:46.411 01:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:46.411 01:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:46.411 01:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:46.411 01:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:46.411 01:18:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:46.411 01:18:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:46.411 01:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:46.411 01:18:08 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:46.411 01:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:17:46.411 01:18:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:51.692 01:18:13 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:51.692 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:51.692 01:18:13 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:51.692 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- 
# pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:51.692 Found net devices under 0000:86:00.0: cvl_0_0 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:51.692 Found net devices under 0000:86:00.1: cvl_0_1 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:51.692 01:18:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:51.692 01:18:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:51.692 01:18:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:51.692 01:18:14 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:51.692 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:51.692 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:17:51.692 00:17:51.692 --- 10.0.0.2 ping statistics --- 00:17:51.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.692 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:17:51.692 01:18:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:51.692 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:51.692 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:17:51.692 00:17:51.692 --- 10.0.0.1 ping statistics --- 00:17:51.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.692 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:17:51.692 01:18:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:51.692 01:18:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:17:51.692 01:18:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:51.692 01:18:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:51.692 01:18:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:51.692 01:18:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:51.692 01:18:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:51.692 01:18:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:51.692 01:18:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:51.692 01:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:17:51.692 01:18:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:51.692 01:18:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- 
# xtrace_disable 00:17:51.692 01:18:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.692 01:18:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=895973 00:17:51.692 01:18:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 895973 00:17:51.692 01:18:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 895973 ']' 00:17:51.692 01:18:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:51.692 01:18:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:51.692 01:18:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:51.692 01:18:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:17:51.692 01:18:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:51.692 01:18:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.669 01:18:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:52.669 01:18:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:52.669 01:18:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:52.669 01:18:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:52.669 01:18:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.669 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:52.669 01:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=896014 00:17:52.669 01:18:15 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:17:52.669 01:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:52.669 01:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:17:52.669 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:52.669 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:52.669 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:52.669 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:17:52.669 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:52.669 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:52.669 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=50c122f3ef9954005f47b65d989fcfbddd55ea09a428f2be 00:17:52.669 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:52.669 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.aKe 00:17:52.669 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 50c122f3ef9954005f47b65d989fcfbddd55ea09a428f2be 0 00:17:52.669 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 50c122f3ef9954005f47b65d989fcfbddd55ea09a428f2be 0 00:17:52.669 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:52.669 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:52.669 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=50c122f3ef9954005f47b65d989fcfbddd55ea09a428f2be 00:17:52.669 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:17:52.669 
01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:52.669 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.aKe 00:17:52.669 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.aKe 00:17:52.669 01:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.aKe 00:17:52.669 01:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:17:52.669 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:52.669 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:52.669 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:52.669 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:52.669 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:52.669 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:52.669 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=2ff4679b5b2a74a0ecdafd6a07d6f7f63ea5510a29005417320354b7087e1322 00:17:52.669 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:52.669 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.7yQ 00:17:52.669 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 2ff4679b5b2a74a0ecdafd6a07d6f7f63ea5510a29005417320354b7087e1322 3 00:17:52.669 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 2ff4679b5b2a74a0ecdafd6a07d6f7f63ea5510a29005417320354b7087e1322 3 00:17:52.669 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:52.669 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:52.669 01:18:15 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=2ff4679b5b2a74a0ecdafd6a07d6f7f63ea5510a29005417320354b7087e1322 00:17:52.669 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:17:52.669 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:52.669 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.7yQ 00:17:52.669 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.7yQ 00:17:52.669 01:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.7yQ 00:17:52.669 01:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:17:52.669 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:52.669 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:52.669 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:52.669 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:52.669 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:52.669 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:52.669 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=38c0db16fd6e623dda80a10203c30a2e 00:17:52.669 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.wGr 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 38c0db16fd6e623dda80a10203c30a2e 1 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 38c0db16fd6e623dda80a10203c30a2e 1 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key 
digest 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=38c0db16fd6e623dda80a10203c30a2e 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.wGr 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.wGr 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.wGr 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=2ed6536658e9f788f84f2a2c35a3911a9942fc7ff2dca868 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.bvN 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 2ed6536658e9f788f84f2a2c35a3911a9942fc7ff2dca868 2 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 
2ed6536658e9f788f84f2a2c35a3911a9942fc7ff2dca868 2 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=2ed6536658e9f788f84f2a2c35a3911a9942fc7ff2dca868 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.bvN 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.bvN 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.bvN 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=fa1f7560024710c233ed66ffb2005d3548a62b421fac8684 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.bw5 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # 
format_dhchap_key fa1f7560024710c233ed66ffb2005d3548a62b421fac8684 2 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 fa1f7560024710c233ed66ffb2005d3548a62b421fac8684 2 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=fa1f7560024710c233ed66ffb2005d3548a62b421fac8684 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.bw5 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.bw5 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.bw5 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=aa6fbd5124d91be1f1e7612208713ba1 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.2hP 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key aa6fbd5124d91be1f1e7612208713ba1 1 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 aa6fbd5124d91be1f1e7612208713ba1 1 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=aa6fbd5124d91be1f1e7612208713ba1 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.2hP 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.2hP 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.2hP 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=1698a3c7560dadf6145b755c41884213d5be2c2094bf48f2fa9f028796591eff 00:17:52.929 01:18:15 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.i6l 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 1698a3c7560dadf6145b755c41884213d5be2c2094bf48f2fa9f028796591eff 3 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 1698a3c7560dadf6145b755c41884213d5be2c2094bf48f2fa9f028796591eff 3 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=1698a3c7560dadf6145b755c41884213d5be2c2094bf48f2fa9f028796591eff 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:17:52.929 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:53.189 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.i6l 00:17:53.189 01:18:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.i6l 00:17:53.189 01:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.i6l 00:17:53.189 01:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:17:53.189 01:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 895973 00:17:53.189 01:18:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 895973 ']' 00:17:53.189 01:18:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:53.189 01:18:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:53.189 01:18:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:17:53.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:53.189 01:18:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:53.189 01:18:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.189 01:18:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:53.189 01:18:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:53.189 01:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 896014 /var/tmp/host.sock 00:17:53.189 01:18:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 896014 ']' 00:17:53.189 01:18:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:17:53.189 01:18:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:53.189 01:18:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:53.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:17:53.189 01:18:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:53.189 01:18:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.448 01:18:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:53.448 01:18:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:53.448 01:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:17:53.448 01:18:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.448 01:18:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.448 01:18:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.448 01:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:53.448 01:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.aKe 00:17:53.449 01:18:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.449 01:18:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.449 01:18:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.449 01:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.aKe 00:17:53.449 01:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.aKe 00:17:53.708 01:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.7yQ ]] 00:17:53.708 01:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.7yQ 00:17:53.708 01:18:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.708 01:18:15 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.708 01:18:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.708 01:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.7yQ 00:17:53.708 01:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.7yQ 00:17:53.708 01:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:53.708 01:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.wGr 00:17:53.708 01:18:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.708 01:18:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.708 01:18:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.708 01:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.wGr 00:17:53.708 01:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.wGr 00:17:53.967 01:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.bvN ]] 00:17:53.967 01:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.bvN 00:17:53.967 01:18:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.967 01:18:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.967 01:18:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.967 01:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc 
keyring_file_add_key ckey1 /tmp/spdk.key-sha384.bvN 00:17:53.967 01:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.bvN 00:17:54.226 01:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:54.226 01:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.bw5 00:17:54.226 01:18:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.226 01:18:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.226 01:18:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.226 01:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.bw5 00:17:54.226 01:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.bw5 00:17:54.486 01:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.2hP ]] 00:17:54.486 01:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.2hP 00:17:54.486 01:18:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.486 01:18:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.486 01:18:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.486 01:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.2hP 00:17:54.486 01:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.2hP 00:17:54.486 01:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:54.486 01:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.i6l 00:17:54.486 01:18:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.486 01:18:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.486 01:18:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.486 01:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.i6l 00:17:54.486 01:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.i6l 00:17:54.745 01:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:17:54.745 01:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:54.745 01:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:54.745 01:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:54.745 01:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:54.745 01:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:55.004 01:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:17:55.004 01:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:55.004 01:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:55.004 01:18:17 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:55.004 01:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:55.004 01:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.004 01:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.004 01:18:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.004 01:18:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.004 01:18:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.004 01:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.004 01:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.264 00:17:55.264 01:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:55.264 01:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.264 01:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:55.264 01:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.264 
01:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.264 01:18:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.264 01:18:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.264 01:18:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.264 01:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:55.264 { 00:17:55.264 "cntlid": 1, 00:17:55.264 "qid": 0, 00:17:55.264 "state": "enabled", 00:17:55.264 "thread": "nvmf_tgt_poll_group_000", 00:17:55.264 "listen_address": { 00:17:55.264 "trtype": "TCP", 00:17:55.264 "adrfam": "IPv4", 00:17:55.264 "traddr": "10.0.0.2", 00:17:55.264 "trsvcid": "4420" 00:17:55.264 }, 00:17:55.264 "peer_address": { 00:17:55.264 "trtype": "TCP", 00:17:55.264 "adrfam": "IPv4", 00:17:55.264 "traddr": "10.0.0.1", 00:17:55.264 "trsvcid": "34860" 00:17:55.264 }, 00:17:55.264 "auth": { 00:17:55.264 "state": "completed", 00:17:55.264 "digest": "sha256", 00:17:55.264 "dhgroup": "null" 00:17:55.264 } 00:17:55.264 } 00:17:55.264 ]' 00:17:55.264 01:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:55.523 01:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:55.523 01:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:55.523 01:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:55.523 01:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:55.523 01:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.523 01:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.523 01:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.782 01:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:NTBjMTIyZjNlZjk5NTQwMDVmNDdiNjVkOTg5ZmNmYmRkZDU1ZWEwOWE0MjhmMmJlGDSRxw==: --dhchap-ctrl-secret DHHC-1:03:MmZmNDY3OWI1YjJhNzRhMGVjZGFmZDZhMDdkNmY3ZjYzZWE1NTEwYTI5MDA1NDE3MzIwMzU0YjcwODdlMTMyMiRVQho=: 00:17:56.349 01:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.349 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.349 01:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:56.349 01:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.349 01:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.349 01:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.349 01:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:56.349 01:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:56.349 01:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:56.349 01:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:17:56.349 01:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
00:17:56.349 01:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:56.350 01:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:56.350 01:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:56.350 01:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.350 01:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.350 01:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.350 01:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.350 01:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.350 01:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.350 01:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.609 00:17:56.609 01:18:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:56.609 01:18:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:56.609 01:18:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
00:17:56.867 01:18:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.867 01:18:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.867 01:18:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.867 01:18:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.867 01:18:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.867 01:18:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:56.867 { 00:17:56.867 "cntlid": 3, 00:17:56.867 "qid": 0, 00:17:56.867 "state": "enabled", 00:17:56.867 "thread": "nvmf_tgt_poll_group_000", 00:17:56.867 "listen_address": { 00:17:56.867 "trtype": "TCP", 00:17:56.867 "adrfam": "IPv4", 00:17:56.867 "traddr": "10.0.0.2", 00:17:56.867 "trsvcid": "4420" 00:17:56.867 }, 00:17:56.867 "peer_address": { 00:17:56.867 "trtype": "TCP", 00:17:56.867 "adrfam": "IPv4", 00:17:56.867 "traddr": "10.0.0.1", 00:17:56.867 "trsvcid": "34884" 00:17:56.867 }, 00:17:56.867 "auth": { 00:17:56.867 "state": "completed", 00:17:56.867 "digest": "sha256", 00:17:56.867 "dhgroup": "null" 00:17:56.867 } 00:17:56.867 } 00:17:56.867 ]' 00:17:56.867 01:18:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:56.867 01:18:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:56.867 01:18:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:56.867 01:18:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:56.867 01:18:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:56.867 01:18:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.867 01:18:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller 
nvme0 00:17:56.867 01:18:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.125 01:18:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MzhjMGRiMTZmZDZlNjIzZGRhODBhMTAyMDNjMzBhMmVshvmJ: --dhchap-ctrl-secret DHHC-1:02:MmVkNjUzNjY1OGU5Zjc4OGY4NGYyYTJjMzVhMzkxMWE5OTQyZmM3ZmYyZGNhODY4jOep9A==: 00:17:57.691 01:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.691 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.691 01:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:57.691 01:18:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.691 01:18:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.691 01:18:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.691 01:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:57.691 01:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:57.691 01:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:57.950 01:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:17:57.950 01:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # 
local digest dhgroup key ckey qpairs 00:17:57.950 01:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:57.950 01:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:57.950 01:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:57.950 01:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.950 01:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:57.950 01:18:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.950 01:18:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.950 01:18:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.950 01:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:57.950 01:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.208 00:17:58.208 01:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:58.208 01:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:58.208 01:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.208 01:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.208 01:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.208 01:18:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.208 01:18:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.467 01:18:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.467 01:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:58.467 { 00:17:58.467 "cntlid": 5, 00:17:58.467 "qid": 0, 00:17:58.467 "state": "enabled", 00:17:58.467 "thread": "nvmf_tgt_poll_group_000", 00:17:58.467 "listen_address": { 00:17:58.467 "trtype": "TCP", 00:17:58.467 "adrfam": "IPv4", 00:17:58.467 "traddr": "10.0.0.2", 00:17:58.467 "trsvcid": "4420" 00:17:58.467 }, 00:17:58.467 "peer_address": { 00:17:58.467 "trtype": "TCP", 00:17:58.467 "adrfam": "IPv4", 00:17:58.467 "traddr": "10.0.0.1", 00:17:58.467 "trsvcid": "34896" 00:17:58.467 }, 00:17:58.467 "auth": { 00:17:58.467 "state": "completed", 00:17:58.467 "digest": "sha256", 00:17:58.467 "dhgroup": "null" 00:17:58.467 } 00:17:58.467 } 00:17:58.467 ]' 00:17:58.467 01:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:58.467 01:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:58.467 01:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:58.467 01:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:58.467 01:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:58.467 01:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.467 01:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 
-- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.467 01:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.725 01:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZmExZjc1NjAwMjQ3MTBjMjMzZWQ2NmZmYjIwMDVkMzU0OGE2MmI0MjFmYWM4Njg0CMe35A==: --dhchap-ctrl-secret DHHC-1:01:YWE2ZmJkNTEyNGQ5MWJlMWYxZTc2MTIyMDg3MTNiYTGb5VQY: 00:17:59.292 01:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.292 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.292 01:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:59.292 01:18:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.292 01:18:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.292 01:18:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.292 01:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:59.292 01:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:59.292 01:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:59.292 01:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:17:59.292 01:18:21 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:59.293 01:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:59.293 01:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:59.293 01:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:59.293 01:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.293 01:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:59.293 01:18:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.293 01:18:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.293 01:18:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.293 01:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:59.293 01:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:59.551 00:17:59.551 01:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:59.551 01:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:59.551 01:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:17:59.810 01:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.810 01:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.810 01:18:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.810 01:18:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.810 01:18:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.810 01:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:59.810 { 00:17:59.810 "cntlid": 7, 00:17:59.810 "qid": 0, 00:17:59.810 "state": "enabled", 00:17:59.810 "thread": "nvmf_tgt_poll_group_000", 00:17:59.810 "listen_address": { 00:17:59.810 "trtype": "TCP", 00:17:59.810 "adrfam": "IPv4", 00:17:59.810 "traddr": "10.0.0.2", 00:17:59.810 "trsvcid": "4420" 00:17:59.810 }, 00:17:59.810 "peer_address": { 00:17:59.810 "trtype": "TCP", 00:17:59.810 "adrfam": "IPv4", 00:17:59.810 "traddr": "10.0.0.1", 00:17:59.810 "trsvcid": "34928" 00:17:59.810 }, 00:17:59.810 "auth": { 00:17:59.810 "state": "completed", 00:17:59.810 "digest": "sha256", 00:17:59.810 "dhgroup": "null" 00:17:59.810 } 00:17:59.810 } 00:17:59.810 ]' 00:17:59.810 01:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:59.810 01:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:59.810 01:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:59.810 01:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:59.810 01:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:59.810 01:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.810 01:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:17:59.810 01:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.068 01:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MTY5OGEzYzc1NjBkYWRmNjE0NWI3NTVjNDE4ODQyMTNkNWJlMmMyMDk0YmY0OGYyZmE5ZjAyODc5NjU5MWVmZkZcKJM=: 00:18:00.636 01:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.636 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.636 01:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:00.636 01:18:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.636 01:18:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.636 01:18:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.636 01:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:00.636 01:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:00.636 01:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:00.636 01:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:00.894 01:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 
ffdhe2048 0 00:18:00.894 01:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:00.894 01:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:00.894 01:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:00.894 01:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:00.894 01:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.894 01:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:00.894 01:18:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.894 01:18:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.894 01:18:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.894 01:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:00.894 01:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.152 00:18:01.152 01:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:01.152 01:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:01.152 01:18:23 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.410 01:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.410 01:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.410 01:18:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.410 01:18:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.410 01:18:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.410 01:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:01.410 { 00:18:01.410 "cntlid": 9, 00:18:01.410 "qid": 0, 00:18:01.410 "state": "enabled", 00:18:01.410 "thread": "nvmf_tgt_poll_group_000", 00:18:01.410 "listen_address": { 00:18:01.410 "trtype": "TCP", 00:18:01.410 "adrfam": "IPv4", 00:18:01.410 "traddr": "10.0.0.2", 00:18:01.410 "trsvcid": "4420" 00:18:01.410 }, 00:18:01.410 "peer_address": { 00:18:01.410 "trtype": "TCP", 00:18:01.410 "adrfam": "IPv4", 00:18:01.410 "traddr": "10.0.0.1", 00:18:01.410 "trsvcid": "34946" 00:18:01.410 }, 00:18:01.410 "auth": { 00:18:01.410 "state": "completed", 00:18:01.410 "digest": "sha256", 00:18:01.410 "dhgroup": "ffdhe2048" 00:18:01.410 } 00:18:01.410 } 00:18:01.410 ]' 00:18:01.410 01:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:01.410 01:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:01.410 01:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:01.410 01:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:01.410 01:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:01.410 01:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 
-- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.410 01:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.410 01:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.669 01:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:NTBjMTIyZjNlZjk5NTQwMDVmNDdiNjVkOTg5ZmNmYmRkZDU1ZWEwOWE0MjhmMmJlGDSRxw==: --dhchap-ctrl-secret DHHC-1:03:MmZmNDY3OWI1YjJhNzRhMGVjZGFmZDZhMDdkNmY3ZjYzZWE1NTEwYTI5MDA1NDE3MzIwMzU0YjcwODdlMTMyMiRVQho=: 00:18:02.238 01:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.238 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.238 01:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:02.238 01:18:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.238 01:18:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.238 01:18:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.238 01:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:02.238 01:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:02.238 01:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:18:02.238 01:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:18:02.238 01:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:02.238 01:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:02.238 01:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:02.238 01:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:02.238 01:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.238 01:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.238 01:18:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.238 01:18:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.238 01:18:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.238 01:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.238 01:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.497 00:18:02.497 01:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:02.497 01:18:24 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:02.497 01:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.757 01:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.757 01:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.757 01:18:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.757 01:18:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.757 01:18:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.757 01:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:02.757 { 00:18:02.757 "cntlid": 11, 00:18:02.757 "qid": 0, 00:18:02.757 "state": "enabled", 00:18:02.757 "thread": "nvmf_tgt_poll_group_000", 00:18:02.757 "listen_address": { 00:18:02.757 "trtype": "TCP", 00:18:02.757 "adrfam": "IPv4", 00:18:02.757 "traddr": "10.0.0.2", 00:18:02.757 "trsvcid": "4420" 00:18:02.757 }, 00:18:02.757 "peer_address": { 00:18:02.757 "trtype": "TCP", 00:18:02.757 "adrfam": "IPv4", 00:18:02.757 "traddr": "10.0.0.1", 00:18:02.757 "trsvcid": "34958" 00:18:02.757 }, 00:18:02.757 "auth": { 00:18:02.757 "state": "completed", 00:18:02.757 "digest": "sha256", 00:18:02.757 "dhgroup": "ffdhe2048" 00:18:02.757 } 00:18:02.757 } 00:18:02.757 ]' 00:18:02.757 01:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:02.757 01:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:02.757 01:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:02.757 01:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:02.757 01:18:25 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:03.016 01:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.016 01:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.016 01:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.016 01:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MzhjMGRiMTZmZDZlNjIzZGRhODBhMTAyMDNjMzBhMmVshvmJ: --dhchap-ctrl-secret DHHC-1:02:MmVkNjUzNjY1OGU5Zjc4OGY4NGYyYTJjMzVhMzkxMWE5OTQyZmM3ZmYyZGNhODY4jOep9A==: 00:18:03.585 01:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.585 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.585 01:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:03.585 01:18:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.585 01:18:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.585 01:18:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.585 01:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:03.585 01:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:03.585 01:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:03.845 01:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:18:03.845 01:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:03.845 01:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:03.845 01:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:03.845 01:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:03.845 01:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.845 01:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:03.845 01:18:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.845 01:18:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.845 01:18:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.845 01:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:03.845 01:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.104 
00:18:04.104 01:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:04.104 01:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:04.104 01:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.363 01:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.363 01:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.363 01:18:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.363 01:18:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.363 01:18:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.363 01:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:04.363 { 00:18:04.363 "cntlid": 13, 00:18:04.363 "qid": 0, 00:18:04.363 "state": "enabled", 00:18:04.363 "thread": "nvmf_tgt_poll_group_000", 00:18:04.363 "listen_address": { 00:18:04.363 "trtype": "TCP", 00:18:04.363 "adrfam": "IPv4", 00:18:04.363 "traddr": "10.0.0.2", 00:18:04.363 "trsvcid": "4420" 00:18:04.363 }, 00:18:04.363 "peer_address": { 00:18:04.363 "trtype": "TCP", 00:18:04.363 "adrfam": "IPv4", 00:18:04.363 "traddr": "10.0.0.1", 00:18:04.363 "trsvcid": "34984" 00:18:04.363 }, 00:18:04.363 "auth": { 00:18:04.363 "state": "completed", 00:18:04.363 "digest": "sha256", 00:18:04.363 "dhgroup": "ffdhe2048" 00:18:04.363 } 00:18:04.363 } 00:18:04.363 ]' 00:18:04.363 01:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:04.363 01:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:04.363 01:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:04.364 01:18:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:04.364 01:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:04.364 01:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.364 01:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.364 01:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.623 01:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZmExZjc1NjAwMjQ3MTBjMjMzZWQ2NmZmYjIwMDVkMzU0OGE2MmI0MjFmYWM4Njg0CMe35A==: --dhchap-ctrl-secret DHHC-1:01:YWE2ZmJkNTEyNGQ5MWJlMWYxZTc2MTIyMDg3MTNiYTGb5VQY: 00:18:05.192 01:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.192 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.192 01:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:05.192 01:18:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.192 01:18:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.192 01:18:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.192 01:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:05.192 01:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 
00:18:05.192 01:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:05.192 01:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:18:05.192 01:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:05.192 01:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:05.192 01:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:05.192 01:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:05.192 01:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.192 01:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:05.192 01:18:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.192 01:18:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.192 01:18:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.192 01:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:05.192 01:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:05.451 
00:18:05.451 01:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:05.451 01:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:05.451 01:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.710 01:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.710 01:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.710 01:18:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.710 01:18:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.710 01:18:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.710 01:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:05.710 { 00:18:05.710 "cntlid": 15, 00:18:05.710 "qid": 0, 00:18:05.710 "state": "enabled", 00:18:05.710 "thread": "nvmf_tgt_poll_group_000", 00:18:05.710 "listen_address": { 00:18:05.710 "trtype": "TCP", 00:18:05.710 "adrfam": "IPv4", 00:18:05.710 "traddr": "10.0.0.2", 00:18:05.710 "trsvcid": "4420" 00:18:05.710 }, 00:18:05.710 "peer_address": { 00:18:05.710 "trtype": "TCP", 00:18:05.710 "adrfam": "IPv4", 00:18:05.710 "traddr": "10.0.0.1", 00:18:05.710 "trsvcid": "52386" 00:18:05.710 }, 00:18:05.710 "auth": { 00:18:05.710 "state": "completed", 00:18:05.710 "digest": "sha256", 00:18:05.710 "dhgroup": "ffdhe2048" 00:18:05.710 } 00:18:05.710 } 00:18:05.710 ]' 00:18:05.710 01:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:05.710 01:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:05.710 01:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:05.710 01:18:28 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:05.710 01:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:05.969 01:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.969 01:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.969 01:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.969 01:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MTY5OGEzYzc1NjBkYWRmNjE0NWI3NTVjNDE4ODQyMTNkNWJlMmMyMDk0YmY0OGYyZmE5ZjAyODc5NjU5MWVmZkZcKJM=: 00:18:06.569 01:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.569 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.569 01:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:06.569 01:18:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.569 01:18:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.569 01:18:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.569 01:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:06.569 01:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:06.569 01:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:06.569 01:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:06.829 01:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:18:06.829 01:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:06.829 01:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:06.829 01:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:06.829 01:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:06.829 01:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.829 01:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:06.829 01:18:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.829 01:18:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.829 01:18:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.829 01:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:06.829 01:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.088 00:18:07.088 01:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:07.089 01:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:07.089 01:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.089 01:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.089 01:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.089 01:18:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.089 01:18:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.089 01:18:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.089 01:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:07.089 { 00:18:07.089 "cntlid": 17, 00:18:07.089 "qid": 0, 00:18:07.089 "state": "enabled", 00:18:07.089 "thread": "nvmf_tgt_poll_group_000", 00:18:07.089 "listen_address": { 00:18:07.089 "trtype": "TCP", 00:18:07.089 "adrfam": "IPv4", 00:18:07.089 "traddr": "10.0.0.2", 00:18:07.089 "trsvcid": "4420" 00:18:07.089 }, 00:18:07.089 "peer_address": { 00:18:07.089 "trtype": "TCP", 00:18:07.089 "adrfam": "IPv4", 00:18:07.089 "traddr": "10.0.0.1", 00:18:07.089 "trsvcid": "52394" 00:18:07.089 }, 00:18:07.089 "auth": { 00:18:07.089 "state": "completed", 00:18:07.089 "digest": "sha256", 00:18:07.089 "dhgroup": "ffdhe3072" 00:18:07.089 } 00:18:07.089 } 00:18:07.089 ]' 00:18:07.089 01:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:07.348 01:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ 
sha256 == \s\h\a\2\5\6 ]] 00:18:07.348 01:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:07.348 01:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:07.348 01:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:07.348 01:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.348 01:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.348 01:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.607 01:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:NTBjMTIyZjNlZjk5NTQwMDVmNDdiNjVkOTg5ZmNmYmRkZDU1ZWEwOWE0MjhmMmJlGDSRxw==: --dhchap-ctrl-secret DHHC-1:03:MmZmNDY3OWI1YjJhNzRhMGVjZGFmZDZhMDdkNmY3ZjYzZWE1NTEwYTI5MDA1NDE3MzIwMzU0YjcwODdlMTMyMiRVQho=: 00:18:08.175 01:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.175 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.175 01:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:08.175 01:18:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.175 01:18:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.175 01:18:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.175 01:18:30 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:08.175 01:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:08.176 01:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:08.176 01:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:18:08.176 01:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:08.176 01:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:08.176 01:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:08.176 01:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:08.176 01:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.176 01:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.176 01:18:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.176 01:18:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.176 01:18:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.176 01:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.176 01:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.435 00:18:08.435 01:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:08.435 01:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:08.435 01:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.694 01:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.694 01:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.694 01:18:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.694 01:18:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.694 01:18:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.694 01:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:08.694 { 00:18:08.694 "cntlid": 19, 00:18:08.694 "qid": 0, 00:18:08.694 "state": "enabled", 00:18:08.694 "thread": "nvmf_tgt_poll_group_000", 00:18:08.694 "listen_address": { 00:18:08.694 "trtype": "TCP", 00:18:08.694 "adrfam": "IPv4", 00:18:08.694 "traddr": "10.0.0.2", 00:18:08.694 "trsvcid": "4420" 00:18:08.694 }, 00:18:08.694 "peer_address": { 00:18:08.694 "trtype": "TCP", 00:18:08.694 "adrfam": "IPv4", 00:18:08.694 "traddr": "10.0.0.1", 00:18:08.694 "trsvcid": "52424" 00:18:08.694 }, 00:18:08.694 "auth": { 00:18:08.694 "state": "completed", 00:18:08.694 "digest": "sha256", 00:18:08.694 "dhgroup": "ffdhe3072" 00:18:08.694 } 00:18:08.694 } 00:18:08.694 ]' 00:18:08.694 
01:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:08.694 01:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:08.694 01:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:08.694 01:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:08.694 01:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:08.694 01:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.694 01:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.694 01:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.953 01:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MzhjMGRiMTZmZDZlNjIzZGRhODBhMTAyMDNjMzBhMmVshvmJ: --dhchap-ctrl-secret DHHC-1:02:MmVkNjUzNjY1OGU5Zjc4OGY4NGYyYTJjMzVhMzkxMWE5OTQyZmM3ZmYyZGNhODY4jOep9A==: 00:18:09.522 01:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.522 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.522 01:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:09.522 01:18:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.522 01:18:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.522 01:18:31 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.522 01:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:09.522 01:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:09.522 01:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:09.781 01:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:18:09.781 01:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:09.781 01:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:09.781 01:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:09.781 01:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:09.781 01:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.781 01:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:09.781 01:18:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.781 01:18:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.781 01:18:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.781 01:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:18:09.781 01:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:10.041 00:18:10.041 01:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:10.041 01:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:10.041 01:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.041 01:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.041 01:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.041 01:18:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.041 01:18:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.041 01:18:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.041 01:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:10.041 { 00:18:10.041 "cntlid": 21, 00:18:10.041 "qid": 0, 00:18:10.041 "state": "enabled", 00:18:10.041 "thread": "nvmf_tgt_poll_group_000", 00:18:10.041 "listen_address": { 00:18:10.041 "trtype": "TCP", 00:18:10.041 "adrfam": "IPv4", 00:18:10.041 "traddr": "10.0.0.2", 00:18:10.041 "trsvcid": "4420" 00:18:10.041 }, 00:18:10.041 "peer_address": { 00:18:10.041 "trtype": "TCP", 00:18:10.041 "adrfam": "IPv4", 00:18:10.041 "traddr": "10.0.0.1", 00:18:10.041 "trsvcid": "52438" 00:18:10.041 }, 00:18:10.041 "auth": { 00:18:10.041 "state": "completed", 00:18:10.041 "digest": 
"sha256", 00:18:10.041 "dhgroup": "ffdhe3072" 00:18:10.041 } 00:18:10.041 } 00:18:10.041 ]' 00:18:10.041 01:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:10.300 01:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:10.300 01:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:10.300 01:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:10.300 01:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:10.300 01:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.300 01:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.300 01:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.559 01:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZmExZjc1NjAwMjQ3MTBjMjMzZWQ2NmZmYjIwMDVkMzU0OGE2MmI0MjFmYWM4Njg0CMe35A==: --dhchap-ctrl-secret DHHC-1:01:YWE2ZmJkNTEyNGQ5MWJlMWYxZTc2MTIyMDg3MTNiYTGb5VQY: 00:18:11.126 01:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.126 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.126 01:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:11.126 01:18:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.126 01:18:33 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.126 01:18:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.126 01:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:11.126 01:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:11.126 01:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:11.126 01:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:18:11.126 01:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:11.126 01:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:11.126 01:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:11.126 01:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:11.126 01:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.126 01:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:11.126 01:18:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.126 01:18:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.126 01:18:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.126 01:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:11.126 01:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:11.385 00:18:11.385 01:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:11.385 01:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:11.385 01:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.644 01:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.644 01:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.644 01:18:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.644 01:18:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.644 01:18:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.644 01:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:11.644 { 00:18:11.644 "cntlid": 23, 00:18:11.644 "qid": 0, 00:18:11.644 "state": "enabled", 00:18:11.644 "thread": "nvmf_tgt_poll_group_000", 00:18:11.644 "listen_address": { 00:18:11.644 "trtype": "TCP", 00:18:11.644 "adrfam": "IPv4", 00:18:11.644 "traddr": "10.0.0.2", 00:18:11.644 "trsvcid": "4420" 00:18:11.644 }, 00:18:11.644 "peer_address": { 00:18:11.644 "trtype": "TCP", 00:18:11.644 "adrfam": "IPv4", 00:18:11.644 "traddr": "10.0.0.1", 00:18:11.644 "trsvcid": "52468" 00:18:11.644 }, 00:18:11.644 "auth": 
{ 00:18:11.644 "state": "completed", 00:18:11.644 "digest": "sha256", 00:18:11.644 "dhgroup": "ffdhe3072" 00:18:11.644 } 00:18:11.644 } 00:18:11.644 ]' 00:18:11.644 01:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:11.644 01:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:11.644 01:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:11.644 01:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:11.644 01:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:11.644 01:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.644 01:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.644 01:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.913 01:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MTY5OGEzYzc1NjBkYWRmNjE0NWI3NTVjNDE4ODQyMTNkNWJlMmMyMDk0YmY0OGYyZmE5ZjAyODc5NjU5MWVmZkZcKJM=: 00:18:12.481 01:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.481 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.481 01:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:12.481 01:18:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.481 01:18:34 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.481 01:18:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.481 01:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:12.481 01:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:12.481 01:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:12.481 01:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:12.740 01:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:18:12.740 01:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:12.740 01:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:12.740 01:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:12.740 01:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:12.740 01:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.740 01:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:12.740 01:18:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.740 01:18:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.740 01:18:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.740 01:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:12.740 01:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:13.000 00:18:13.000 01:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:13.000 01:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:13.000 01:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.000 01:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.000 01:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.000 01:18:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.000 01:18:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.000 01:18:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.000 01:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:13.000 { 00:18:13.000 "cntlid": 25, 00:18:13.000 "qid": 0, 00:18:13.000 "state": "enabled", 00:18:13.000 "thread": "nvmf_tgt_poll_group_000", 00:18:13.000 "listen_address": { 00:18:13.000 "trtype": "TCP", 00:18:13.000 "adrfam": "IPv4", 00:18:13.000 "traddr": "10.0.0.2", 00:18:13.000 "trsvcid": "4420" 00:18:13.000 }, 00:18:13.000 "peer_address": { 00:18:13.000 "trtype": "TCP", 
00:18:13.000 "adrfam": "IPv4", 00:18:13.000 "traddr": "10.0.0.1", 00:18:13.000 "trsvcid": "52492" 00:18:13.000 }, 00:18:13.000 "auth": { 00:18:13.000 "state": "completed", 00:18:13.000 "digest": "sha256", 00:18:13.000 "dhgroup": "ffdhe4096" 00:18:13.000 } 00:18:13.000 } 00:18:13.000 ]' 00:18:13.000 01:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:13.259 01:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:13.259 01:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:13.259 01:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:13.259 01:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:13.259 01:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.259 01:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.259 01:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.259 01:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:NTBjMTIyZjNlZjk5NTQwMDVmNDdiNjVkOTg5ZmNmYmRkZDU1ZWEwOWE0MjhmMmJlGDSRxw==: --dhchap-ctrl-secret DHHC-1:03:MmZmNDY3OWI1YjJhNzRhMGVjZGFmZDZhMDdkNmY3ZjYzZWE1NTEwYTI5MDA1NDE3MzIwMzU0YjcwODdlMTMyMiRVQho=: 00:18:13.827 01:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.827 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.827 01:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:13.827 01:18:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.827 01:18:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.827 01:18:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.827 01:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:13.827 01:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:13.827 01:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:14.086 01:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:18:14.086 01:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:14.086 01:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:14.086 01:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:14.086 01:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:14.086 01:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:14.086 01:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.086 01:18:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.086 01:18:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.086 01:18:36 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.086 01:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.086 01:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.345 00:18:14.345 01:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:14.345 01:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:14.346 01:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.605 01:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.605 01:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.605 01:18:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.605 01:18:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.605 01:18:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.605 01:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:14.605 { 00:18:14.605 "cntlid": 27, 00:18:14.605 "qid": 0, 00:18:14.605 "state": "enabled", 00:18:14.605 "thread": "nvmf_tgt_poll_group_000", 00:18:14.605 "listen_address": { 00:18:14.605 "trtype": "TCP", 00:18:14.605 "adrfam": 
"IPv4", 00:18:14.605 "traddr": "10.0.0.2", 00:18:14.605 "trsvcid": "4420" 00:18:14.605 }, 00:18:14.605 "peer_address": { 00:18:14.605 "trtype": "TCP", 00:18:14.605 "adrfam": "IPv4", 00:18:14.605 "traddr": "10.0.0.1", 00:18:14.605 "trsvcid": "49580" 00:18:14.605 }, 00:18:14.605 "auth": { 00:18:14.605 "state": "completed", 00:18:14.605 "digest": "sha256", 00:18:14.605 "dhgroup": "ffdhe4096" 00:18:14.605 } 00:18:14.605 } 00:18:14.605 ]' 00:18:14.605 01:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:14.605 01:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:14.605 01:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:14.605 01:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:14.605 01:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:14.605 01:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.605 01:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.605 01:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.864 01:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MzhjMGRiMTZmZDZlNjIzZGRhODBhMTAyMDNjMzBhMmVshvmJ: --dhchap-ctrl-secret DHHC-1:02:MmVkNjUzNjY1OGU5Zjc4OGY4NGYyYTJjMzVhMzkxMWE5OTQyZmM3ZmYyZGNhODY4jOep9A==: 00:18:15.432 01:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.432 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:18:15.432 01:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:15.432 01:18:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.432 01:18:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.432 01:18:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.432 01:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:15.432 01:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:15.432 01:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:15.691 01:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:18:15.691 01:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:15.691 01:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:15.691 01:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:15.691 01:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:15.691 01:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.691 01:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.691 01:18:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.691 01:18:37 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.691 01:18:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.691 01:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.691 01:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.951 00:18:15.951 01:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:15.951 01:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:15.951 01:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.951 01:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.951 01:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.951 01:18:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.951 01:18:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.951 01:18:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.951 01:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:15.951 { 00:18:15.951 "cntlid": 29, 00:18:15.951 "qid": 0, 00:18:15.951 "state": "enabled", 00:18:15.951 "thread": 
"nvmf_tgt_poll_group_000", 00:18:15.951 "listen_address": { 00:18:15.951 "trtype": "TCP", 00:18:15.951 "adrfam": "IPv4", 00:18:15.951 "traddr": "10.0.0.2", 00:18:15.951 "trsvcid": "4420" 00:18:15.951 }, 00:18:15.951 "peer_address": { 00:18:15.951 "trtype": "TCP", 00:18:15.951 "adrfam": "IPv4", 00:18:15.951 "traddr": "10.0.0.1", 00:18:15.951 "trsvcid": "49608" 00:18:15.951 }, 00:18:15.951 "auth": { 00:18:15.951 "state": "completed", 00:18:15.951 "digest": "sha256", 00:18:15.951 "dhgroup": "ffdhe4096" 00:18:15.951 } 00:18:15.951 } 00:18:15.951 ]' 00:18:15.951 01:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:15.951 01:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:15.951 01:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:16.210 01:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:16.210 01:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:16.210 01:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.210 01:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.211 01:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.211 01:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZmExZjc1NjAwMjQ3MTBjMjMzZWQ2NmZmYjIwMDVkMzU0OGE2MmI0MjFmYWM4Njg0CMe35A==: --dhchap-ctrl-secret DHHC-1:01:YWE2ZmJkNTEyNGQ5MWJlMWYxZTc2MTIyMDg3MTNiYTGb5VQY: 00:18:16.779 01:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # 
nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.779 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.779 01:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:16.779 01:18:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.779 01:18:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.779 01:18:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.779 01:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:16.779 01:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:16.779 01:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:17.038 01:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:18:17.038 01:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:17.038 01:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:17.038 01:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:17.038 01:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:17.038 01:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.038 01:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:17.038 01:18:39 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.038 01:18:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.038 01:18:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.038 01:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:17.038 01:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:17.297 00:18:17.297 01:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:17.297 01:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.297 01:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:17.557 01:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.557 01:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.557 01:18:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.557 01:18:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.557 01:18:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.557 01:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:17.557 { 00:18:17.557 "cntlid": 31, 00:18:17.557 "qid": 0, 00:18:17.557 "state": "enabled", 00:18:17.557 "thread": 
"nvmf_tgt_poll_group_000", 00:18:17.557 "listen_address": { 00:18:17.557 "trtype": "TCP", 00:18:17.557 "adrfam": "IPv4", 00:18:17.557 "traddr": "10.0.0.2", 00:18:17.557 "trsvcid": "4420" 00:18:17.557 }, 00:18:17.557 "peer_address": { 00:18:17.557 "trtype": "TCP", 00:18:17.557 "adrfam": "IPv4", 00:18:17.557 "traddr": "10.0.0.1", 00:18:17.557 "trsvcid": "49630" 00:18:17.557 }, 00:18:17.557 "auth": { 00:18:17.557 "state": "completed", 00:18:17.557 "digest": "sha256", 00:18:17.557 "dhgroup": "ffdhe4096" 00:18:17.557 } 00:18:17.557 } 00:18:17.557 ]' 00:18:17.557 01:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:17.557 01:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:17.557 01:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:17.557 01:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:17.557 01:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:17.557 01:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.557 01:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.557 01:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.816 01:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MTY5OGEzYzc1NjBkYWRmNjE0NWI3NTVjNDE4ODQyMTNkNWJlMmMyMDk0YmY0OGYyZmE5ZjAyODc5NjU5MWVmZkZcKJM=: 00:18:18.385 01:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.385 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.385 01:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:18.385 01:18:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.385 01:18:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.385 01:18:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.385 01:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:18.385 01:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:18.385 01:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:18.385 01:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:18.644 01:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:18:18.644 01:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:18.644 01:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:18.644 01:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:18.644 01:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:18.644 01:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.644 01:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:18:18.644 01:18:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.644 01:18:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.644 01:18:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.644 01:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:18.644 01:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:18.903 00:18:18.903 01:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:18.903 01:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:18.903 01:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.163 01:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.163 01:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.163 01:18:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.163 01:18:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.163 01:18:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.163 01:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:18:19.163 { 00:18:19.163 "cntlid": 33, 00:18:19.163 "qid": 0, 00:18:19.163 "state": "enabled", 00:18:19.163 "thread": "nvmf_tgt_poll_group_000", 00:18:19.163 "listen_address": { 00:18:19.163 "trtype": "TCP", 00:18:19.163 "adrfam": "IPv4", 00:18:19.163 "traddr": "10.0.0.2", 00:18:19.163 "trsvcid": "4420" 00:18:19.163 }, 00:18:19.163 "peer_address": { 00:18:19.163 "trtype": "TCP", 00:18:19.163 "adrfam": "IPv4", 00:18:19.163 "traddr": "10.0.0.1", 00:18:19.163 "trsvcid": "49662" 00:18:19.163 }, 00:18:19.163 "auth": { 00:18:19.163 "state": "completed", 00:18:19.163 "digest": "sha256", 00:18:19.163 "dhgroup": "ffdhe6144" 00:18:19.163 } 00:18:19.163 } 00:18:19.163 ]' 00:18:19.163 01:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:19.163 01:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:19.163 01:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:19.163 01:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:19.163 01:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:19.163 01:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.163 01:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.163 01:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.422 01:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:NTBjMTIyZjNlZjk5NTQwMDVmNDdiNjVkOTg5ZmNmYmRkZDU1ZWEwOWE0MjhmMmJlGDSRxw==: --dhchap-ctrl-secret 
DHHC-1:03:MmZmNDY3OWI1YjJhNzRhMGVjZGFmZDZhMDdkNmY3ZjYzZWE1NTEwYTI5MDA1NDE3MzIwMzU0YjcwODdlMTMyMiRVQho=: 00:18:19.989 01:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.989 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.989 01:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:19.989 01:18:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.989 01:18:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.989 01:18:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.989 01:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:19.989 01:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:19.989 01:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:19.989 01:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:18:19.989 01:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:19.989 01:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:19.989 01:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:19.989 01:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:19.989 01:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.989 01:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:19.989 01:18:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.989 01:18:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.989 01:18:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.989 01:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:19.989 01:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:20.560 00:18:20.560 01:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:20.560 01:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:20.560 01:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.560 01:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.560 01:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.560 01:18:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.560 01:18:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.560 01:18:42 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.560 01:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:20.560 { 00:18:20.560 "cntlid": 35, 00:18:20.560 "qid": 0, 00:18:20.560 "state": "enabled", 00:18:20.560 "thread": "nvmf_tgt_poll_group_000", 00:18:20.560 "listen_address": { 00:18:20.560 "trtype": "TCP", 00:18:20.560 "adrfam": "IPv4", 00:18:20.560 "traddr": "10.0.0.2", 00:18:20.560 "trsvcid": "4420" 00:18:20.560 }, 00:18:20.560 "peer_address": { 00:18:20.560 "trtype": "TCP", 00:18:20.560 "adrfam": "IPv4", 00:18:20.560 "traddr": "10.0.0.1", 00:18:20.560 "trsvcid": "49704" 00:18:20.560 }, 00:18:20.560 "auth": { 00:18:20.560 "state": "completed", 00:18:20.560 "digest": "sha256", 00:18:20.560 "dhgroup": "ffdhe6144" 00:18:20.560 } 00:18:20.560 } 00:18:20.560 ]' 00:18:20.560 01:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:20.560 01:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:20.560 01:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:20.820 01:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:20.820 01:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:20.820 01:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.820 01:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.820 01:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.820 01:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MzhjMGRiMTZmZDZlNjIzZGRhODBhMTAyMDNjMzBhMmVshvmJ: --dhchap-ctrl-secret DHHC-1:02:MmVkNjUzNjY1OGU5Zjc4OGY4NGYyYTJjMzVhMzkxMWE5OTQyZmM3ZmYyZGNhODY4jOep9A==: 00:18:21.387 01:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.387 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.387 01:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:21.387 01:18:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.387 01:18:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.387 01:18:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.387 01:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:21.387 01:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:21.387 01:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:21.646 01:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:18:21.646 01:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:21.646 01:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:21.646 01:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:21.646 01:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:21.646 01:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:18:21.646 01:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:21.646 01:18:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.646 01:18:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.646 01:18:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.646 01:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:21.646 01:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:21.905 00:18:21.905 01:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:21.905 01:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:21.905 01:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.165 01:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.165 01:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.165 01:18:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.165 01:18:44 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.165 01:18:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.165 01:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:22.165 { 00:18:22.165 "cntlid": 37, 00:18:22.165 "qid": 0, 00:18:22.165 "state": "enabled", 00:18:22.165 "thread": "nvmf_tgt_poll_group_000", 00:18:22.165 "listen_address": { 00:18:22.165 "trtype": "TCP", 00:18:22.165 "adrfam": "IPv4", 00:18:22.165 "traddr": "10.0.0.2", 00:18:22.165 "trsvcid": "4420" 00:18:22.165 }, 00:18:22.165 "peer_address": { 00:18:22.165 "trtype": "TCP", 00:18:22.165 "adrfam": "IPv4", 00:18:22.165 "traddr": "10.0.0.1", 00:18:22.165 "trsvcid": "49724" 00:18:22.165 }, 00:18:22.165 "auth": { 00:18:22.165 "state": "completed", 00:18:22.165 "digest": "sha256", 00:18:22.165 "dhgroup": "ffdhe6144" 00:18:22.165 } 00:18:22.165 } 00:18:22.165 ]' 00:18:22.165 01:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:22.165 01:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:22.165 01:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:22.424 01:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:22.424 01:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:22.424 01:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.424 01:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.424 01:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.424 01:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 
1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZmExZjc1NjAwMjQ3MTBjMjMzZWQ2NmZmYjIwMDVkMzU0OGE2MmI0MjFmYWM4Njg0CMe35A==: --dhchap-ctrl-secret DHHC-1:01:YWE2ZmJkNTEyNGQ5MWJlMWYxZTc2MTIyMDg3MTNiYTGb5VQY: 00:18:22.991 01:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.991 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.992 01:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:22.992 01:18:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.992 01:18:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.992 01:18:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.992 01:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:22.992 01:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:22.992 01:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:23.251 01:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:18:23.251 01:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:23.251 01:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:23.251 01:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:23.251 01:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:23.251 01:18:45 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.251 01:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:23.251 01:18:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.251 01:18:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.251 01:18:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.251 01:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:23.251 01:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:23.511 00:18:23.511 01:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:23.511 01:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:23.511 01:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.770 01:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.770 01:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:23.770 01:18:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.770 01:18:46 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.770 01:18:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.770 01:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:23.770 { 00:18:23.770 "cntlid": 39, 00:18:23.770 "qid": 0, 00:18:23.770 "state": "enabled", 00:18:23.770 "thread": "nvmf_tgt_poll_group_000", 00:18:23.770 "listen_address": { 00:18:23.770 "trtype": "TCP", 00:18:23.770 "adrfam": "IPv4", 00:18:23.770 "traddr": "10.0.0.2", 00:18:23.770 "trsvcid": "4420" 00:18:23.770 }, 00:18:23.770 "peer_address": { 00:18:23.770 "trtype": "TCP", 00:18:23.770 "adrfam": "IPv4", 00:18:23.770 "traddr": "10.0.0.1", 00:18:23.770 "trsvcid": "49744" 00:18:23.770 }, 00:18:23.770 "auth": { 00:18:23.770 "state": "completed", 00:18:23.770 "digest": "sha256", 00:18:23.770 "dhgroup": "ffdhe6144" 00:18:23.770 } 00:18:23.770 } 00:18:23.770 ]' 00:18:23.770 01:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:23.770 01:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:23.770 01:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:23.770 01:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:23.770 01:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:23.770 01:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:23.770 01:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.770 01:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.029 01:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 
1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MTY5OGEzYzc1NjBkYWRmNjE0NWI3NTVjNDE4ODQyMTNkNWJlMmMyMDk0YmY0OGYyZmE5ZjAyODc5NjU5MWVmZkZcKJM=: 00:18:24.597 01:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.597 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.597 01:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:24.597 01:18:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.597 01:18:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.597 01:18:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.597 01:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:24.597 01:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:24.597 01:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:24.597 01:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:24.857 01:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:18:24.857 01:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:24.857 01:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:24.857 01:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:24.857 01:18:47 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:18:24.857 01:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.857 01:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.857 01:18:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.857 01:18:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.857 01:18:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.857 01:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.857 01:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:25.425 00:18:25.425 01:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:25.425 01:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:25.425 01:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.425 01:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.425 01:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:18:25.425 01:18:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.425 01:18:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.425 01:18:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.425 01:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:25.425 { 00:18:25.425 "cntlid": 41, 00:18:25.425 "qid": 0, 00:18:25.425 "state": "enabled", 00:18:25.425 "thread": "nvmf_tgt_poll_group_000", 00:18:25.425 "listen_address": { 00:18:25.425 "trtype": "TCP", 00:18:25.425 "adrfam": "IPv4", 00:18:25.425 "traddr": "10.0.0.2", 00:18:25.425 "trsvcid": "4420" 00:18:25.425 }, 00:18:25.425 "peer_address": { 00:18:25.425 "trtype": "TCP", 00:18:25.425 "adrfam": "IPv4", 00:18:25.425 "traddr": "10.0.0.1", 00:18:25.425 "trsvcid": "49880" 00:18:25.425 }, 00:18:25.425 "auth": { 00:18:25.425 "state": "completed", 00:18:25.425 "digest": "sha256", 00:18:25.425 "dhgroup": "ffdhe8192" 00:18:25.425 } 00:18:25.425 } 00:18:25.425 ]' 00:18:25.425 01:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:25.425 01:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:25.425 01:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:25.684 01:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:25.684 01:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:25.684 01:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.684 01:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.684 01:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:18:25.684 01:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:NTBjMTIyZjNlZjk5NTQwMDVmNDdiNjVkOTg5ZmNmYmRkZDU1ZWEwOWE0MjhmMmJlGDSRxw==: --dhchap-ctrl-secret DHHC-1:03:MmZmNDY3OWI1YjJhNzRhMGVjZGFmZDZhMDdkNmY3ZjYzZWE1NTEwYTI5MDA1NDE3MzIwMzU0YjcwODdlMTMyMiRVQho=: 00:18:26.253 01:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.253 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.253 01:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:26.253 01:18:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.253 01:18:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.253 01:18:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.253 01:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:26.253 01:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:26.253 01:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:26.512 01:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:18:26.512 01:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:26.512 01:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha256 00:18:26.512 01:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:26.512 01:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:26.512 01:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.512 01:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.512 01:18:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.512 01:18:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.512 01:18:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.512 01:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.513 01:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:27.079 00:18:27.079 01:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:27.079 01:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:27.079 01:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.338 01:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:18:27.338 01:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:27.338 01:18:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.338 01:18:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.338 01:18:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.338 01:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:27.338 { 00:18:27.338 "cntlid": 43, 00:18:27.338 "qid": 0, 00:18:27.338 "state": "enabled", 00:18:27.338 "thread": "nvmf_tgt_poll_group_000", 00:18:27.338 "listen_address": { 00:18:27.338 "trtype": "TCP", 00:18:27.338 "adrfam": "IPv4", 00:18:27.338 "traddr": "10.0.0.2", 00:18:27.338 "trsvcid": "4420" 00:18:27.338 }, 00:18:27.338 "peer_address": { 00:18:27.338 "trtype": "TCP", 00:18:27.338 "adrfam": "IPv4", 00:18:27.338 "traddr": "10.0.0.1", 00:18:27.338 "trsvcid": "49898" 00:18:27.338 }, 00:18:27.338 "auth": { 00:18:27.338 "state": "completed", 00:18:27.338 "digest": "sha256", 00:18:27.338 "dhgroup": "ffdhe8192" 00:18:27.338 } 00:18:27.338 } 00:18:27.338 ]' 00:18:27.338 01:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:27.338 01:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:27.338 01:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:27.338 01:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:27.338 01:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:27.338 01:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:27.338 01:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.338 01:18:49 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.597 01:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MzhjMGRiMTZmZDZlNjIzZGRhODBhMTAyMDNjMzBhMmVshvmJ: --dhchap-ctrl-secret DHHC-1:02:MmVkNjUzNjY1OGU5Zjc4OGY4NGYyYTJjMzVhMzkxMWE5OTQyZmM3ZmYyZGNhODY4jOep9A==: 00:18:28.167 01:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:28.167 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:28.167 01:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:28.167 01:18:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.167 01:18:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.167 01:18:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.167 01:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:28.167 01:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:28.167 01:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:28.167 01:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:18:28.167 01:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
00:18:28.167 01:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:28.167 01:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:28.167 01:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:28.167 01:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.167 01:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:28.167 01:18:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.167 01:18:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.167 01:18:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.167 01:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:28.167 01:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:28.736 00:18:28.736 01:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:28.736 01:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:28.736 01:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
00:18:28.996 01:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.996 01:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.996 01:18:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.996 01:18:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.996 01:18:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.996 01:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:28.996 { 00:18:28.996 "cntlid": 45, 00:18:28.996 "qid": 0, 00:18:28.996 "state": "enabled", 00:18:28.996 "thread": "nvmf_tgt_poll_group_000", 00:18:28.996 "listen_address": { 00:18:28.996 "trtype": "TCP", 00:18:28.996 "adrfam": "IPv4", 00:18:28.996 "traddr": "10.0.0.2", 00:18:28.996 "trsvcid": "4420" 00:18:28.996 }, 00:18:28.996 "peer_address": { 00:18:28.996 "trtype": "TCP", 00:18:28.996 "adrfam": "IPv4", 00:18:28.996 "traddr": "10.0.0.1", 00:18:28.996 "trsvcid": "49922" 00:18:28.996 }, 00:18:28.996 "auth": { 00:18:28.996 "state": "completed", 00:18:28.996 "digest": "sha256", 00:18:28.996 "dhgroup": "ffdhe8192" 00:18:28.996 } 00:18:28.996 } 00:18:28.996 ]' 00:18:28.996 01:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:28.996 01:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:28.996 01:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:28.996 01:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:28.996 01:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:28.996 01:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.996 01:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:18:28.996 01:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.255 01:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZmExZjc1NjAwMjQ3MTBjMjMzZWQ2NmZmYjIwMDVkMzU0OGE2MmI0MjFmYWM4Njg0CMe35A==: --dhchap-ctrl-secret DHHC-1:01:YWE2ZmJkNTEyNGQ5MWJlMWYxZTc2MTIyMDg3MTNiYTGb5VQY: 00:18:29.823 01:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.823 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.823 01:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:29.823 01:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.823 01:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.823 01:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.823 01:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:29.823 01:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:29.823 01:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:29.823 01:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:18:29.823 01:18:52 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:29.823 01:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:29.823 01:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:29.823 01:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:29.823 01:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:29.823 01:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:29.823 01:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.823 01:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.823 01:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.823 01:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:29.823 01:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:30.391 00:18:30.391 01:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:30.391 01:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:30.391 01:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:18:30.650 01:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.650 01:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:30.650 01:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.650 01:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.650 01:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.650 01:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:30.650 { 00:18:30.650 "cntlid": 47, 00:18:30.650 "qid": 0, 00:18:30.650 "state": "enabled", 00:18:30.650 "thread": "nvmf_tgt_poll_group_000", 00:18:30.650 "listen_address": { 00:18:30.650 "trtype": "TCP", 00:18:30.650 "adrfam": "IPv4", 00:18:30.650 "traddr": "10.0.0.2", 00:18:30.650 "trsvcid": "4420" 00:18:30.650 }, 00:18:30.650 "peer_address": { 00:18:30.650 "trtype": "TCP", 00:18:30.650 "adrfam": "IPv4", 00:18:30.650 "traddr": "10.0.0.1", 00:18:30.650 "trsvcid": "49950" 00:18:30.650 }, 00:18:30.650 "auth": { 00:18:30.650 "state": "completed", 00:18:30.650 "digest": "sha256", 00:18:30.650 "dhgroup": "ffdhe8192" 00:18:30.650 } 00:18:30.650 } 00:18:30.650 ]' 00:18:30.650 01:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:30.650 01:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:30.650 01:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:30.650 01:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:30.650 01:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:30.650 01:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.650 01:18:53 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.650 01:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.909 01:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MTY5OGEzYzc1NjBkYWRmNjE0NWI3NTVjNDE4ODQyMTNkNWJlMmMyMDk0YmY0OGYyZmE5ZjAyODc5NjU5MWVmZkZcKJM=: 00:18:31.476 01:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.476 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.476 01:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:31.476 01:18:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.476 01:18:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.476 01:18:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.476 01:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:31.476 01:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:31.476 01:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:31.476 01:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:31.476 01:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups null 00:18:31.735 01:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:18:31.735 01:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:31.735 01:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:31.735 01:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:31.735 01:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:31.735 01:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.735 01:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:31.735 01:18:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.735 01:18:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.735 01:18:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.735 01:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:31.735 01:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:31.735 00:18:31.735 01:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:31.735 01:18:54 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:31.735 01:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.994 01:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.994 01:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:31.994 01:18:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.994 01:18:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.994 01:18:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.994 01:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:31.994 { 00:18:31.994 "cntlid": 49, 00:18:31.994 "qid": 0, 00:18:31.994 "state": "enabled", 00:18:31.994 "thread": "nvmf_tgt_poll_group_000", 00:18:31.994 "listen_address": { 00:18:31.994 "trtype": "TCP", 00:18:31.994 "adrfam": "IPv4", 00:18:31.994 "traddr": "10.0.0.2", 00:18:31.994 "trsvcid": "4420" 00:18:31.994 }, 00:18:31.994 "peer_address": { 00:18:31.994 "trtype": "TCP", 00:18:31.994 "adrfam": "IPv4", 00:18:31.994 "traddr": "10.0.0.1", 00:18:31.994 "trsvcid": "49966" 00:18:31.994 }, 00:18:31.994 "auth": { 00:18:31.994 "state": "completed", 00:18:31.994 "digest": "sha384", 00:18:31.994 "dhgroup": "null" 00:18:31.994 } 00:18:31.994 } 00:18:31.994 ]' 00:18:31.994 01:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:31.994 01:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:31.994 01:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:32.253 01:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:32.253 01:18:54 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:32.253 01:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:32.253 01:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.253 01:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.253 01:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:NTBjMTIyZjNlZjk5NTQwMDVmNDdiNjVkOTg5ZmNmYmRkZDU1ZWEwOWE0MjhmMmJlGDSRxw==: --dhchap-ctrl-secret DHHC-1:03:MmZmNDY3OWI1YjJhNzRhMGVjZGFmZDZhMDdkNmY3ZjYzZWE1NTEwYTI5MDA1NDE3MzIwMzU0YjcwODdlMTMyMiRVQho=: 00:18:32.822 01:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:32.822 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:32.822 01:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:32.822 01:18:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.822 01:18:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.822 01:18:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.822 01:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:32.822 01:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:32.822 01:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:33.081 01:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:18:33.081 01:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:33.081 01:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:33.081 01:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:33.081 01:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:33.081 01:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:33.081 01:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:33.082 01:18:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.082 01:18:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.082 01:18:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.082 01:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:33.082 01:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:33.341 00:18:33.341 
01:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:33.341 01:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:33.341 01:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.601 01:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.601 01:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.601 01:18:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.601 01:18:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.601 01:18:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.601 01:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:33.601 { 00:18:33.601 "cntlid": 51, 00:18:33.601 "qid": 0, 00:18:33.601 "state": "enabled", 00:18:33.601 "thread": "nvmf_tgt_poll_group_000", 00:18:33.601 "listen_address": { 00:18:33.601 "trtype": "TCP", 00:18:33.601 "adrfam": "IPv4", 00:18:33.601 "traddr": "10.0.0.2", 00:18:33.601 "trsvcid": "4420" 00:18:33.601 }, 00:18:33.601 "peer_address": { 00:18:33.601 "trtype": "TCP", 00:18:33.601 "adrfam": "IPv4", 00:18:33.601 "traddr": "10.0.0.1", 00:18:33.601 "trsvcid": "50000" 00:18:33.601 }, 00:18:33.601 "auth": { 00:18:33.601 "state": "completed", 00:18:33.601 "digest": "sha384", 00:18:33.601 "dhgroup": "null" 00:18:33.601 } 00:18:33.601 } 00:18:33.601 ]' 00:18:33.601 01:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:33.601 01:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:33.601 01:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:33.601 01:18:55 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:33.601 01:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:33.601 01:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:33.601 01:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:33.601 01:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:33.860 01:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MzhjMGRiMTZmZDZlNjIzZGRhODBhMTAyMDNjMzBhMmVshvmJ: --dhchap-ctrl-secret DHHC-1:02:MmVkNjUzNjY1OGU5Zjc4OGY4NGYyYTJjMzVhMzkxMWE5OTQyZmM3ZmYyZGNhODY4jOep9A==: 00:18:34.428 01:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:34.428 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:34.428 01:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:34.428 01:18:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.428 01:18:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.428 01:18:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.428 01:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:34.428 01:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:34.428 01:18:56 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:34.428 01:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:18:34.428 01:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:34.428 01:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:34.428 01:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:34.428 01:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:34.428 01:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.428 01:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:34.428 01:18:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.428 01:18:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.428 01:18:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.428 01:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:34.428 01:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:18:34.688 00:18:34.688 01:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:34.688 01:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:34.688 01:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.982 01:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.982 01:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:34.982 01:18:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.982 01:18:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.982 01:18:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.982 01:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:34.982 { 00:18:34.982 "cntlid": 53, 00:18:34.982 "qid": 0, 00:18:34.982 "state": "enabled", 00:18:34.982 "thread": "nvmf_tgt_poll_group_000", 00:18:34.982 "listen_address": { 00:18:34.982 "trtype": "TCP", 00:18:34.982 "adrfam": "IPv4", 00:18:34.982 "traddr": "10.0.0.2", 00:18:34.982 "trsvcid": "4420" 00:18:34.982 }, 00:18:34.982 "peer_address": { 00:18:34.982 "trtype": "TCP", 00:18:34.982 "adrfam": "IPv4", 00:18:34.982 "traddr": "10.0.0.1", 00:18:34.982 "trsvcid": "38788" 00:18:34.982 }, 00:18:34.982 "auth": { 00:18:34.982 "state": "completed", 00:18:34.982 "digest": "sha384", 00:18:34.982 "dhgroup": "null" 00:18:34.982 } 00:18:34.982 } 00:18:34.982 ]' 00:18:34.982 01:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:34.982 01:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:34.982 01:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:18:34.982 01:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:34.982 01:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:34.982 01:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.982 01:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.982 01:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.242 01:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZmExZjc1NjAwMjQ3MTBjMjMzZWQ2NmZmYjIwMDVkMzU0OGE2MmI0MjFmYWM4Njg0CMe35A==: --dhchap-ctrl-secret DHHC-1:01:YWE2ZmJkNTEyNGQ5MWJlMWYxZTc2MTIyMDg3MTNiYTGb5VQY: 00:18:35.810 01:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.810 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.810 01:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:35.810 01:18:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.810 01:18:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.810 01:18:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.810 01:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:35.810 01:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups null 00:18:35.810 01:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:35.810 01:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:18:35.810 01:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:35.810 01:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:35.810 01:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:35.810 01:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:35.810 01:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:35.810 01:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:35.810 01:18:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.810 01:18:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.070 01:18:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.070 01:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:36.070 01:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 
00:18:36.070 00:18:36.070 01:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:36.070 01:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:36.070 01:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.329 01:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.329 01:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.329 01:18:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.329 01:18:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.329 01:18:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.329 01:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:36.329 { 00:18:36.329 "cntlid": 55, 00:18:36.329 "qid": 0, 00:18:36.329 "state": "enabled", 00:18:36.329 "thread": "nvmf_tgt_poll_group_000", 00:18:36.329 "listen_address": { 00:18:36.329 "trtype": "TCP", 00:18:36.329 "adrfam": "IPv4", 00:18:36.329 "traddr": "10.0.0.2", 00:18:36.329 "trsvcid": "4420" 00:18:36.329 }, 00:18:36.329 "peer_address": { 00:18:36.329 "trtype": "TCP", 00:18:36.329 "adrfam": "IPv4", 00:18:36.329 "traddr": "10.0.0.1", 00:18:36.329 "trsvcid": "38806" 00:18:36.329 }, 00:18:36.329 "auth": { 00:18:36.329 "state": "completed", 00:18:36.329 "digest": "sha384", 00:18:36.329 "dhgroup": "null" 00:18:36.329 } 00:18:36.329 } 00:18:36.329 ]' 00:18:36.329 01:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:36.329 01:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:36.329 01:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:36.588 
01:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:36.588 01:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:36.588 01:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.588 01:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.588 01:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:36.588 01:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MTY5OGEzYzc1NjBkYWRmNjE0NWI3NTVjNDE4ODQyMTNkNWJlMmMyMDk0YmY0OGYyZmE5ZjAyODc5NjU5MWVmZkZcKJM=: 00:18:37.157 01:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.157 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.157 01:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:37.157 01:18:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.157 01:18:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.157 01:18:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.157 01:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:37.157 01:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:37.157 01:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:37.157 01:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:37.416 01:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:18:37.416 01:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:37.416 01:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:37.416 01:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:37.416 01:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:37.416 01:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:37.416 01:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.416 01:18:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.416 01:18:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.416 01:18:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.416 01:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.416 01:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.675 00:18:37.675 01:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:37.675 01:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:37.675 01:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.934 01:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.934 01:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:37.934 01:19:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.934 01:19:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.934 01:19:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.934 01:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:37.934 { 00:18:37.934 "cntlid": 57, 00:18:37.934 "qid": 0, 00:18:37.934 "state": "enabled", 00:18:37.934 "thread": "nvmf_tgt_poll_group_000", 00:18:37.934 "listen_address": { 00:18:37.934 "trtype": "TCP", 00:18:37.934 "adrfam": "IPv4", 00:18:37.934 "traddr": "10.0.0.2", 00:18:37.934 "trsvcid": "4420" 00:18:37.934 }, 00:18:37.934 "peer_address": { 00:18:37.934 "trtype": "TCP", 00:18:37.934 "adrfam": "IPv4", 00:18:37.934 "traddr": "10.0.0.1", 00:18:37.934 "trsvcid": "38840" 00:18:37.934 }, 00:18:37.934 "auth": { 00:18:37.934 "state": "completed", 00:18:37.934 "digest": "sha384", 00:18:37.934 "dhgroup": "ffdhe2048" 00:18:37.934 } 00:18:37.934 } 00:18:37.934 ]' 00:18:37.934 01:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:37.934 01:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ 
sha384 == \s\h\a\3\8\4 ]] 00:18:37.934 01:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:37.934 01:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:37.934 01:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:37.934 01:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:37.934 01:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:37.934 01:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.194 01:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:NTBjMTIyZjNlZjk5NTQwMDVmNDdiNjVkOTg5ZmNmYmRkZDU1ZWEwOWE0MjhmMmJlGDSRxw==: --dhchap-ctrl-secret DHHC-1:03:MmZmNDY3OWI1YjJhNzRhMGVjZGFmZDZhMDdkNmY3ZjYzZWE1NTEwYTI5MDA1NDE3MzIwMzU0YjcwODdlMTMyMiRVQho=: 00:18:38.763 01:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.763 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.763 01:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:38.763 01:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.763 01:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.763 01:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.763 01:19:01 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:38.763 01:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:38.763 01:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:38.763 01:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:18:38.763 01:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:38.763 01:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:38.763 01:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:38.763 01:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:38.763 01:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:38.763 01:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:38.763 01:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.763 01:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.763 01:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.763 01:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:38.763 01:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.023 00:18:39.023 01:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:39.023 01:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:39.023 01:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.282 01:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.282 01:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:39.282 01:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.282 01:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.282 01:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.282 01:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:39.282 { 00:18:39.282 "cntlid": 59, 00:18:39.282 "qid": 0, 00:18:39.282 "state": "enabled", 00:18:39.282 "thread": "nvmf_tgt_poll_group_000", 00:18:39.282 "listen_address": { 00:18:39.282 "trtype": "TCP", 00:18:39.282 "adrfam": "IPv4", 00:18:39.282 "traddr": "10.0.0.2", 00:18:39.282 "trsvcid": "4420" 00:18:39.282 }, 00:18:39.282 "peer_address": { 00:18:39.282 "trtype": "TCP", 00:18:39.282 "adrfam": "IPv4", 00:18:39.282 "traddr": "10.0.0.1", 00:18:39.282 "trsvcid": "38884" 00:18:39.282 }, 00:18:39.282 "auth": { 00:18:39.282 "state": "completed", 00:18:39.282 "digest": "sha384", 00:18:39.282 "dhgroup": "ffdhe2048" 00:18:39.282 } 00:18:39.282 } 00:18:39.282 ]' 00:18:39.282 
01:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:39.282 01:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:39.282 01:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:39.541 01:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:39.541 01:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:39.541 01:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:39.541 01:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:39.541 01:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:39.541 01:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MzhjMGRiMTZmZDZlNjIzZGRhODBhMTAyMDNjMzBhMmVshvmJ: --dhchap-ctrl-secret DHHC-1:02:MmVkNjUzNjY1OGU5Zjc4OGY4NGYyYTJjMzVhMzkxMWE5OTQyZmM3ZmYyZGNhODY4jOep9A==: 00:18:40.109 01:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.110 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.110 01:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:40.110 01:19:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.110 01:19:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.110 01:19:02 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.110 01:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:40.110 01:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:40.110 01:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:40.369 01:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:18:40.369 01:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:40.369 01:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:40.369 01:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:40.369 01:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:40.369 01:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:40.369 01:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:40.369 01:19:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.369 01:19:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.369 01:19:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.369 01:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:18:40.369 01:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:40.628 00:18:40.628 01:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:40.628 01:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:40.628 01:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.628 01:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.888 01:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:40.888 01:19:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.888 01:19:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.888 01:19:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.888 01:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:40.888 { 00:18:40.888 "cntlid": 61, 00:18:40.888 "qid": 0, 00:18:40.888 "state": "enabled", 00:18:40.888 "thread": "nvmf_tgt_poll_group_000", 00:18:40.888 "listen_address": { 00:18:40.888 "trtype": "TCP", 00:18:40.888 "adrfam": "IPv4", 00:18:40.888 "traddr": "10.0.0.2", 00:18:40.888 "trsvcid": "4420" 00:18:40.888 }, 00:18:40.888 "peer_address": { 00:18:40.888 "trtype": "TCP", 00:18:40.888 "adrfam": "IPv4", 00:18:40.888 "traddr": "10.0.0.1", 00:18:40.888 "trsvcid": "38916" 00:18:40.888 }, 00:18:40.888 "auth": { 00:18:40.888 "state": "completed", 00:18:40.888 "digest": 
"sha384", 00:18:40.888 "dhgroup": "ffdhe2048" 00:18:40.888 } 00:18:40.888 } 00:18:40.888 ]' 00:18:40.888 01:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:40.888 01:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:40.888 01:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:40.888 01:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:40.888 01:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:40.888 01:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:40.888 01:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.888 01:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.146 01:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZmExZjc1NjAwMjQ3MTBjMjMzZWQ2NmZmYjIwMDVkMzU0OGE2MmI0MjFmYWM4Njg0CMe35A==: --dhchap-ctrl-secret DHHC-1:01:YWE2ZmJkNTEyNGQ5MWJlMWYxZTc2MTIyMDg3MTNiYTGb5VQY: 00:18:41.715 01:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.715 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:41.715 01:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:41.715 01:19:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.715 01:19:03 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.715 01:19:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.715 01:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:41.715 01:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:41.715 01:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:41.715 01:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:18:41.715 01:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:41.715 01:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:41.715 01:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:41.715 01:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:41.715 01:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:41.715 01:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:41.715 01:19:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.715 01:19:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.715 01:19:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.715 01:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:41.715 01:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:41.974 00:18:41.974 01:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:41.974 01:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:41.974 01:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.233 01:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.233 01:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:42.233 01:19:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.233 01:19:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.233 01:19:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.233 01:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:42.233 { 00:18:42.233 "cntlid": 63, 00:18:42.233 "qid": 0, 00:18:42.233 "state": "enabled", 00:18:42.233 "thread": "nvmf_tgt_poll_group_000", 00:18:42.233 "listen_address": { 00:18:42.233 "trtype": "TCP", 00:18:42.233 "adrfam": "IPv4", 00:18:42.233 "traddr": "10.0.0.2", 00:18:42.233 "trsvcid": "4420" 00:18:42.233 }, 00:18:42.233 "peer_address": { 00:18:42.233 "trtype": "TCP", 00:18:42.233 "adrfam": "IPv4", 00:18:42.233 "traddr": "10.0.0.1", 00:18:42.233 "trsvcid": "38946" 00:18:42.233 }, 00:18:42.233 "auth": 
{ 00:18:42.233 "state": "completed", 00:18:42.233 "digest": "sha384", 00:18:42.233 "dhgroup": "ffdhe2048" 00:18:42.233 } 00:18:42.233 } 00:18:42.233 ]' 00:18:42.233 01:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:42.233 01:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:42.233 01:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:42.233 01:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:42.233 01:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:42.493 01:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:42.493 01:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.493 01:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:42.493 01:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MTY5OGEzYzc1NjBkYWRmNjE0NWI3NTVjNDE4ODQyMTNkNWJlMmMyMDk0YmY0OGYyZmE5ZjAyODc5NjU5MWVmZkZcKJM=: 00:18:43.062 01:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:43.062 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:43.062 01:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:43.062 01:19:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.062 01:19:05 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.062 01:19:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.062 01:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:43.062 01:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:43.062 01:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:43.062 01:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:43.321 01:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:18:43.321 01:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:43.321 01:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:43.321 01:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:43.321 01:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:43.321 01:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:43.321 01:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.321 01:19:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.321 01:19:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.321 01:19:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.321 01:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.321 01:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.580 00:18:43.580 01:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:43.580 01:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:43.580 01:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.840 01:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.840 01:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.840 01:19:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.840 01:19:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.840 01:19:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.840 01:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:43.840 { 00:18:43.840 "cntlid": 65, 00:18:43.840 "qid": 0, 00:18:43.840 "state": "enabled", 00:18:43.840 "thread": "nvmf_tgt_poll_group_000", 00:18:43.840 "listen_address": { 00:18:43.840 "trtype": "TCP", 00:18:43.840 "adrfam": "IPv4", 00:18:43.840 "traddr": "10.0.0.2", 00:18:43.840 "trsvcid": "4420" 00:18:43.840 }, 00:18:43.840 "peer_address": { 00:18:43.840 "trtype": "TCP", 
00:18:43.840 "adrfam": "IPv4", 00:18:43.840 "traddr": "10.0.0.1", 00:18:43.840 "trsvcid": "38978" 00:18:43.840 }, 00:18:43.840 "auth": { 00:18:43.840 "state": "completed", 00:18:43.840 "digest": "sha384", 00:18:43.840 "dhgroup": "ffdhe3072" 00:18:43.840 } 00:18:43.840 } 00:18:43.840 ]' 00:18:43.840 01:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:43.840 01:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:43.840 01:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:43.840 01:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:43.840 01:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:43.840 01:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.840 01:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.840 01:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.099 01:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:NTBjMTIyZjNlZjk5NTQwMDVmNDdiNjVkOTg5ZmNmYmRkZDU1ZWEwOWE0MjhmMmJlGDSRxw==: --dhchap-ctrl-secret DHHC-1:03:MmZmNDY3OWI1YjJhNzRhMGVjZGFmZDZhMDdkNmY3ZjYzZWE1NTEwYTI5MDA1NDE3MzIwMzU0YjcwODdlMTMyMiRVQho=: 00:18:44.668 01:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.668 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.668 01:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:44.668 01:19:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.668 01:19:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.668 01:19:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.668 01:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:44.668 01:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:44.668 01:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:44.668 01:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:18:44.668 01:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:44.668 01:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:44.668 01:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:44.668 01:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:44.668 01:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:44.668 01:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.668 01:19:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.668 01:19:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.668 01:19:07 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.668 01:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.668 01:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.927 00:18:44.927 01:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:44.927 01:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:44.927 01:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.186 01:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.186 01:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.186 01:19:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.186 01:19:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.186 01:19:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.186 01:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:45.186 { 00:18:45.186 "cntlid": 67, 00:18:45.186 "qid": 0, 00:18:45.186 "state": "enabled", 00:18:45.186 "thread": "nvmf_tgt_poll_group_000", 00:18:45.186 "listen_address": { 00:18:45.186 "trtype": "TCP", 00:18:45.186 "adrfam": 
"IPv4", 00:18:45.186 "traddr": "10.0.0.2", 00:18:45.186 "trsvcid": "4420" 00:18:45.186 }, 00:18:45.186 "peer_address": { 00:18:45.186 "trtype": "TCP", 00:18:45.186 "adrfam": "IPv4", 00:18:45.186 "traddr": "10.0.0.1", 00:18:45.186 "trsvcid": "44688" 00:18:45.186 }, 00:18:45.186 "auth": { 00:18:45.186 "state": "completed", 00:18:45.186 "digest": "sha384", 00:18:45.186 "dhgroup": "ffdhe3072" 00:18:45.186 } 00:18:45.186 } 00:18:45.186 ]' 00:18:45.186 01:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:45.186 01:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:45.186 01:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:45.186 01:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:45.186 01:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:45.445 01:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.445 01:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.445 01:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:45.445 01:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MzhjMGRiMTZmZDZlNjIzZGRhODBhMTAyMDNjMzBhMmVshvmJ: --dhchap-ctrl-secret DHHC-1:02:MmVkNjUzNjY1OGU5Zjc4OGY4NGYyYTJjMzVhMzkxMWE5OTQyZmM3ZmYyZGNhODY4jOep9A==: 00:18:46.013 01:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.013 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:18:46.013 01:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:46.013 01:19:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.013 01:19:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.013 01:19:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.013 01:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:46.013 01:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:46.014 01:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:46.272 01:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:18:46.272 01:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:46.272 01:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:46.272 01:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:46.272 01:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:46.272 01:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:46.272 01:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:46.272 01:19:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.272 01:19:08 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.272 01:19:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.272 01:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:46.272 01:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:46.532 00:18:46.532 01:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:46.532 01:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:46.532 01:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:46.791 01:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.791 01:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:46.791 01:19:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.791 01:19:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.791 01:19:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.791 01:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:46.791 { 00:18:46.791 "cntlid": 69, 00:18:46.791 "qid": 0, 00:18:46.791 "state": "enabled", 00:18:46.791 "thread": 
"nvmf_tgt_poll_group_000", 00:18:46.791 "listen_address": { 00:18:46.791 "trtype": "TCP", 00:18:46.791 "adrfam": "IPv4", 00:18:46.791 "traddr": "10.0.0.2", 00:18:46.791 "trsvcid": "4420" 00:18:46.791 }, 00:18:46.791 "peer_address": { 00:18:46.791 "trtype": "TCP", 00:18:46.791 "adrfam": "IPv4", 00:18:46.791 "traddr": "10.0.0.1", 00:18:46.791 "trsvcid": "44714" 00:18:46.791 }, 00:18:46.791 "auth": { 00:18:46.791 "state": "completed", 00:18:46.791 "digest": "sha384", 00:18:46.791 "dhgroup": "ffdhe3072" 00:18:46.791 } 00:18:46.791 } 00:18:46.791 ]' 00:18:46.791 01:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:46.791 01:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:46.791 01:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:46.791 01:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:46.791 01:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:46.791 01:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:46.791 01:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:46.791 01:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.050 01:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZmExZjc1NjAwMjQ3MTBjMjMzZWQ2NmZmYjIwMDVkMzU0OGE2MmI0MjFmYWM4Njg0CMe35A==: --dhchap-ctrl-secret DHHC-1:01:YWE2ZmJkNTEyNGQ5MWJlMWYxZTc2MTIyMDg3MTNiYTGb5VQY: 00:18:47.619 01:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # 
nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:47.619 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:47.619 01:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:47.619 01:19:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.619 01:19:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.619 01:19:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.619 01:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:47.619 01:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:47.619 01:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:47.619 01:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:18:47.619 01:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:47.619 01:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:47.619 01:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:47.619 01:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:47.619 01:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:47.619 01:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:47.619 01:19:10 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.619 01:19:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.619 01:19:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.619 01:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:47.619 01:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:47.878 00:18:47.878 01:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:47.878 01:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:47.878 01:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.136 01:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.136 01:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:48.136 01:19:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.136 01:19:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.136 01:19:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.136 01:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:48.136 { 00:18:48.136 "cntlid": 71, 00:18:48.136 "qid": 0, 00:18:48.136 "state": "enabled", 00:18:48.136 "thread": 
"nvmf_tgt_poll_group_000", 00:18:48.136 "listen_address": { 00:18:48.136 "trtype": "TCP", 00:18:48.136 "adrfam": "IPv4", 00:18:48.136 "traddr": "10.0.0.2", 00:18:48.136 "trsvcid": "4420" 00:18:48.136 }, 00:18:48.136 "peer_address": { 00:18:48.136 "trtype": "TCP", 00:18:48.136 "adrfam": "IPv4", 00:18:48.136 "traddr": "10.0.0.1", 00:18:48.136 "trsvcid": "44730" 00:18:48.136 }, 00:18:48.136 "auth": { 00:18:48.136 "state": "completed", 00:18:48.136 "digest": "sha384", 00:18:48.136 "dhgroup": "ffdhe3072" 00:18:48.136 } 00:18:48.136 } 00:18:48.136 ]' 00:18:48.136 01:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:48.136 01:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:48.136 01:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:48.136 01:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:48.136 01:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:48.394 01:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:48.394 01:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:48.395 01:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:48.395 01:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MTY5OGEzYzc1NjBkYWRmNjE0NWI3NTVjNDE4ODQyMTNkNWJlMmMyMDk0YmY0OGYyZmE5ZjAyODc5NjU5MWVmZkZcKJM=: 00:18:48.975 01:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.975 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.975 01:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:48.975 01:19:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.975 01:19:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.975 01:19:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.975 01:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:48.975 01:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:48.975 01:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:48.975 01:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:49.245 01:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:18:49.245 01:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:49.245 01:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:49.245 01:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:49.245 01:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:49.245 01:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:49.245 01:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:18:49.245 01:19:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.245 01:19:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.245 01:19:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.245 01:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.245 01:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.506 00:18:49.506 01:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:49.506 01:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:49.506 01:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.767 01:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.767 01:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:49.767 01:19:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.767 01:19:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.767 01:19:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.767 01:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:18:49.767 { 00:18:49.767 "cntlid": 73, 00:18:49.767 "qid": 0, 00:18:49.767 "state": "enabled", 00:18:49.767 "thread": "nvmf_tgt_poll_group_000", 00:18:49.767 "listen_address": { 00:18:49.767 "trtype": "TCP", 00:18:49.767 "adrfam": "IPv4", 00:18:49.767 "traddr": "10.0.0.2", 00:18:49.767 "trsvcid": "4420" 00:18:49.767 }, 00:18:49.767 "peer_address": { 00:18:49.767 "trtype": "TCP", 00:18:49.767 "adrfam": "IPv4", 00:18:49.767 "traddr": "10.0.0.1", 00:18:49.767 "trsvcid": "44766" 00:18:49.767 }, 00:18:49.767 "auth": { 00:18:49.767 "state": "completed", 00:18:49.767 "digest": "sha384", 00:18:49.767 "dhgroup": "ffdhe4096" 00:18:49.767 } 00:18:49.767 } 00:18:49.767 ]' 00:18:49.767 01:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:49.767 01:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:49.767 01:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:49.767 01:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:49.767 01:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:49.767 01:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.767 01:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.767 01:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.027 01:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:NTBjMTIyZjNlZjk5NTQwMDVmNDdiNjVkOTg5ZmNmYmRkZDU1ZWEwOWE0MjhmMmJlGDSRxw==: --dhchap-ctrl-secret 
DHHC-1:03:MmZmNDY3OWI1YjJhNzRhMGVjZGFmZDZhMDdkNmY3ZjYzZWE1NTEwYTI5MDA1NDE3MzIwMzU0YjcwODdlMTMyMiRVQho=: 00:18:50.596 01:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.596 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:50.596 01:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:50.596 01:19:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.596 01:19:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.596 01:19:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.596 01:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:50.596 01:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:50.596 01:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:50.596 01:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:18:50.596 01:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:50.596 01:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:50.596 01:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:50.596 01:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:50.596 01:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:50.596 01:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:50.596 01:19:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.596 01:19:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.596 01:19:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.596 01:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:50.596 01:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:50.856 00:18:50.856 01:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:50.856 01:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.856 01:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:51.117 01:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.117 01:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.117 01:19:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.117 01:19:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.117 01:19:13 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.117 01:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:51.117 { 00:18:51.117 "cntlid": 75, 00:18:51.117 "qid": 0, 00:18:51.117 "state": "enabled", 00:18:51.117 "thread": "nvmf_tgt_poll_group_000", 00:18:51.117 "listen_address": { 00:18:51.117 "trtype": "TCP", 00:18:51.117 "adrfam": "IPv4", 00:18:51.117 "traddr": "10.0.0.2", 00:18:51.117 "trsvcid": "4420" 00:18:51.117 }, 00:18:51.117 "peer_address": { 00:18:51.117 "trtype": "TCP", 00:18:51.117 "adrfam": "IPv4", 00:18:51.117 "traddr": "10.0.0.1", 00:18:51.117 "trsvcid": "44806" 00:18:51.117 }, 00:18:51.117 "auth": { 00:18:51.117 "state": "completed", 00:18:51.117 "digest": "sha384", 00:18:51.117 "dhgroup": "ffdhe4096" 00:18:51.117 } 00:18:51.117 } 00:18:51.117 ]' 00:18:51.117 01:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:51.117 01:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:51.117 01:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:51.117 01:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:51.117 01:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:51.377 01:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.377 01:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.377 01:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.377 01:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MzhjMGRiMTZmZDZlNjIzZGRhODBhMTAyMDNjMzBhMmVshvmJ: --dhchap-ctrl-secret DHHC-1:02:MmVkNjUzNjY1OGU5Zjc4OGY4NGYyYTJjMzVhMzkxMWE5OTQyZmM3ZmYyZGNhODY4jOep9A==: 00:18:51.948 01:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.948 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.948 01:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:51.948 01:19:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.948 01:19:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.948 01:19:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.948 01:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:51.948 01:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:51.948 01:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:52.208 01:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:18:52.208 01:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:52.208 01:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:52.208 01:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:52.208 01:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:52.208 01:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:18:52.208 01:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.208 01:19:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.208 01:19:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.208 01:19:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.208 01:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.208 01:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.469 00:18:52.469 01:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:52.469 01:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:52.469 01:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.469 01:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.469 01:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:52.469 01:19:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.469 01:19:14 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.729 01:19:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.729 01:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:52.729 { 00:18:52.729 "cntlid": 77, 00:18:52.729 "qid": 0, 00:18:52.729 "state": "enabled", 00:18:52.729 "thread": "nvmf_tgt_poll_group_000", 00:18:52.729 "listen_address": { 00:18:52.729 "trtype": "TCP", 00:18:52.729 "adrfam": "IPv4", 00:18:52.729 "traddr": "10.0.0.2", 00:18:52.729 "trsvcid": "4420" 00:18:52.729 }, 00:18:52.729 "peer_address": { 00:18:52.729 "trtype": "TCP", 00:18:52.729 "adrfam": "IPv4", 00:18:52.729 "traddr": "10.0.0.1", 00:18:52.729 "trsvcid": "44820" 00:18:52.729 }, 00:18:52.729 "auth": { 00:18:52.729 "state": "completed", 00:18:52.729 "digest": "sha384", 00:18:52.729 "dhgroup": "ffdhe4096" 00:18:52.729 } 00:18:52.729 } 00:18:52.729 ]' 00:18:52.729 01:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:52.729 01:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:52.729 01:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:52.729 01:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:52.729 01:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:52.729 01:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.729 01:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.729 01:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:52.988 01:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 
1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZmExZjc1NjAwMjQ3MTBjMjMzZWQ2NmZmYjIwMDVkMzU0OGE2MmI0MjFmYWM4Njg0CMe35A==: --dhchap-ctrl-secret DHHC-1:01:YWE2ZmJkNTEyNGQ5MWJlMWYxZTc2MTIyMDg3MTNiYTGb5VQY: 00:18:53.558 01:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:53.558 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:53.558 01:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:53.558 01:19:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.558 01:19:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.558 01:19:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.558 01:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:53.558 01:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:53.558 01:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:53.558 01:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:18:53.558 01:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:53.558 01:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:53.558 01:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:53.558 01:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:53.558 01:19:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:53.558 01:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:53.558 01:19:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.558 01:19:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.558 01:19:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.558 01:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:53.558 01:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:53.817 00:18:53.817 01:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:53.817 01:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:53.817 01:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.077 01:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.077 01:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.077 01:19:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.077 01:19:16 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.077 01:19:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.077 01:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:54.077 { 00:18:54.077 "cntlid": 79, 00:18:54.077 "qid": 0, 00:18:54.077 "state": "enabled", 00:18:54.077 "thread": "nvmf_tgt_poll_group_000", 00:18:54.077 "listen_address": { 00:18:54.077 "trtype": "TCP", 00:18:54.077 "adrfam": "IPv4", 00:18:54.077 "traddr": "10.0.0.2", 00:18:54.077 "trsvcid": "4420" 00:18:54.077 }, 00:18:54.077 "peer_address": { 00:18:54.077 "trtype": "TCP", 00:18:54.077 "adrfam": "IPv4", 00:18:54.077 "traddr": "10.0.0.1", 00:18:54.077 "trsvcid": "44854" 00:18:54.077 }, 00:18:54.077 "auth": { 00:18:54.077 "state": "completed", 00:18:54.077 "digest": "sha384", 00:18:54.077 "dhgroup": "ffdhe4096" 00:18:54.077 } 00:18:54.077 } 00:18:54.077 ]' 00:18:54.077 01:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:54.077 01:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:54.077 01:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:54.077 01:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:54.077 01:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:54.077 01:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.077 01:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.077 01:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:54.337 01:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 
1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MTY5OGEzYzc1NjBkYWRmNjE0NWI3NTVjNDE4ODQyMTNkNWJlMmMyMDk0YmY0OGYyZmE5ZjAyODc5NjU5MWVmZkZcKJM=: 00:18:54.907 01:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.907 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.907 01:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:54.907 01:19:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.907 01:19:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.907 01:19:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.907 01:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:54.907 01:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:54.907 01:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:54.907 01:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:55.167 01:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:18:55.167 01:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:55.167 01:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:55.167 01:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:55.167 01:19:17 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:18:55.167 01:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:55.167 01:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:55.167 01:19:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.167 01:19:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.167 01:19:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.167 01:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:55.167 01:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:55.427 00:18:55.427 01:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:55.427 01:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:55.427 01:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.687 01:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.687 01:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:18:55.687 01:19:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.687 01:19:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.687 01:19:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.687 01:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:55.687 { 00:18:55.687 "cntlid": 81, 00:18:55.687 "qid": 0, 00:18:55.687 "state": "enabled", 00:18:55.687 "thread": "nvmf_tgt_poll_group_000", 00:18:55.687 "listen_address": { 00:18:55.687 "trtype": "TCP", 00:18:55.687 "adrfam": "IPv4", 00:18:55.687 "traddr": "10.0.0.2", 00:18:55.687 "trsvcid": "4420" 00:18:55.687 }, 00:18:55.687 "peer_address": { 00:18:55.687 "trtype": "TCP", 00:18:55.687 "adrfam": "IPv4", 00:18:55.687 "traddr": "10.0.0.1", 00:18:55.687 "trsvcid": "59436" 00:18:55.687 }, 00:18:55.687 "auth": { 00:18:55.687 "state": "completed", 00:18:55.687 "digest": "sha384", 00:18:55.687 "dhgroup": "ffdhe6144" 00:18:55.687 } 00:18:55.687 } 00:18:55.687 ]' 00:18:55.687 01:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:55.687 01:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:55.687 01:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:55.687 01:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:55.687 01:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:55.687 01:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.687 01:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.687 01:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:18:55.953 01:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:NTBjMTIyZjNlZjk5NTQwMDVmNDdiNjVkOTg5ZmNmYmRkZDU1ZWEwOWE0MjhmMmJlGDSRxw==: --dhchap-ctrl-secret DHHC-1:03:MmZmNDY3OWI1YjJhNzRhMGVjZGFmZDZhMDdkNmY3ZjYzZWE1NTEwYTI5MDA1NDE3MzIwMzU0YjcwODdlMTMyMiRVQho=: 00:18:56.522 01:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.522 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.522 01:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:56.522 01:19:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.522 01:19:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.522 01:19:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.522 01:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:56.522 01:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:56.522 01:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:56.522 01:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:18:56.522 01:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:56.522 01:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha384 00:18:56.522 01:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:56.522 01:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:56.522 01:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:56.522 01:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:56.522 01:19:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.522 01:19:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.522 01:19:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.522 01:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:56.522 01:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:56.780 00:18:56.780 01:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:56.780 01:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:56.780 01:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.039 01:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:18:57.039 01:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.039 01:19:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.039 01:19:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.039 01:19:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.039 01:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:57.039 { 00:18:57.039 "cntlid": 83, 00:18:57.039 "qid": 0, 00:18:57.039 "state": "enabled", 00:18:57.039 "thread": "nvmf_tgt_poll_group_000", 00:18:57.039 "listen_address": { 00:18:57.039 "trtype": "TCP", 00:18:57.039 "adrfam": "IPv4", 00:18:57.039 "traddr": "10.0.0.2", 00:18:57.039 "trsvcid": "4420" 00:18:57.039 }, 00:18:57.039 "peer_address": { 00:18:57.039 "trtype": "TCP", 00:18:57.039 "adrfam": "IPv4", 00:18:57.039 "traddr": "10.0.0.1", 00:18:57.039 "trsvcid": "59478" 00:18:57.039 }, 00:18:57.039 "auth": { 00:18:57.039 "state": "completed", 00:18:57.039 "digest": "sha384", 00:18:57.039 "dhgroup": "ffdhe6144" 00:18:57.039 } 00:18:57.039 } 00:18:57.039 ]' 00:18:57.039 01:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:57.039 01:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:57.039 01:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:57.039 01:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:57.039 01:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:57.298 01:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.298 01:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.298 01:19:19 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.298 01:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MzhjMGRiMTZmZDZlNjIzZGRhODBhMTAyMDNjMzBhMmVshvmJ: --dhchap-ctrl-secret DHHC-1:02:MmVkNjUzNjY1OGU5Zjc4OGY4NGYyYTJjMzVhMzkxMWE5OTQyZmM3ZmYyZGNhODY4jOep9A==: 00:18:57.867 01:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:57.867 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:57.867 01:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:57.867 01:19:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.867 01:19:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.867 01:19:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.868 01:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:57.868 01:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:57.868 01:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:58.128 01:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:18:58.128 01:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
00:18:58.128 01:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:58.128 01:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:58.128 01:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:58.128 01:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.128 01:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.128 01:19:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.128 01:19:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.128 01:19:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.128 01:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.128 01:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.388 00:18:58.388 01:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:58.388 01:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:58.388 01:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
00:18:58.648 01:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.648 01:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.648 01:19:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.648 01:19:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.648 01:19:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.648 01:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:58.648 { 00:18:58.648 "cntlid": 85, 00:18:58.648 "qid": 0, 00:18:58.648 "state": "enabled", 00:18:58.648 "thread": "nvmf_tgt_poll_group_000", 00:18:58.648 "listen_address": { 00:18:58.648 "trtype": "TCP", 00:18:58.648 "adrfam": "IPv4", 00:18:58.648 "traddr": "10.0.0.2", 00:18:58.648 "trsvcid": "4420" 00:18:58.648 }, 00:18:58.648 "peer_address": { 00:18:58.648 "trtype": "TCP", 00:18:58.648 "adrfam": "IPv4", 00:18:58.648 "traddr": "10.0.0.1", 00:18:58.648 "trsvcid": "59510" 00:18:58.648 }, 00:18:58.648 "auth": { 00:18:58.648 "state": "completed", 00:18:58.648 "digest": "sha384", 00:18:58.648 "dhgroup": "ffdhe6144" 00:18:58.648 } 00:18:58.648 } 00:18:58.648 ]' 00:18:58.648 01:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:58.648 01:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:58.648 01:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:58.648 01:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:58.648 01:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:58.648 01:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:58.648 01:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:18:58.648 01:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:58.909 01:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZmExZjc1NjAwMjQ3MTBjMjMzZWQ2NmZmYjIwMDVkMzU0OGE2MmI0MjFmYWM4Njg0CMe35A==: --dhchap-ctrl-secret DHHC-1:01:YWE2ZmJkNTEyNGQ5MWJlMWYxZTc2MTIyMDg3MTNiYTGb5VQY: 00:18:59.478 01:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.478 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.478 01:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:59.478 01:19:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.478 01:19:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.478 01:19:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.478 01:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:59.478 01:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:59.478 01:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:59.738 01:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:18:59.738 01:19:22 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:59.738 01:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:59.738 01:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:59.738 01:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:59.738 01:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:59.738 01:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:59.738 01:19:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.738 01:19:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.738 01:19:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.738 01:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:59.738 01:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:59.997 00:18:59.997 01:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:59.997 01:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:59.997 01:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:19:00.257 01:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.257 01:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.257 01:19:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.257 01:19:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.257 01:19:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.257 01:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:00.257 { 00:19:00.257 "cntlid": 87, 00:19:00.257 "qid": 0, 00:19:00.257 "state": "enabled", 00:19:00.257 "thread": "nvmf_tgt_poll_group_000", 00:19:00.257 "listen_address": { 00:19:00.257 "trtype": "TCP", 00:19:00.257 "adrfam": "IPv4", 00:19:00.257 "traddr": "10.0.0.2", 00:19:00.257 "trsvcid": "4420" 00:19:00.257 }, 00:19:00.257 "peer_address": { 00:19:00.257 "trtype": "TCP", 00:19:00.257 "adrfam": "IPv4", 00:19:00.257 "traddr": "10.0.0.1", 00:19:00.257 "trsvcid": "59538" 00:19:00.257 }, 00:19:00.257 "auth": { 00:19:00.257 "state": "completed", 00:19:00.257 "digest": "sha384", 00:19:00.257 "dhgroup": "ffdhe6144" 00:19:00.257 } 00:19:00.257 } 00:19:00.257 ]' 00:19:00.257 01:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:00.257 01:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:00.257 01:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:00.257 01:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:00.257 01:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:00.257 01:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.257 01:19:22 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.257 01:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.518 01:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MTY5OGEzYzc1NjBkYWRmNjE0NWI3NTVjNDE4ODQyMTNkNWJlMmMyMDk0YmY0OGYyZmE5ZjAyODc5NjU5MWVmZkZcKJM=: 00:19:01.087 01:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.087 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.087 01:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:01.087 01:19:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.087 01:19:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.087 01:19:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.087 01:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:01.087 01:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:01.087 01:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:01.087 01:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:01.087 01:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- 
# connect_authenticate sha384 ffdhe8192 0 00:19:01.087 01:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:01.087 01:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:01.087 01:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:01.087 01:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:01.087 01:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:01.087 01:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.087 01:19:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.088 01:19:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.088 01:19:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.088 01:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.088 01:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.657 00:19:01.657 01:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:01.657 01:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:01.657 01:19:24 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.917 01:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.917 01:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.917 01:19:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.917 01:19:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.917 01:19:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.917 01:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:01.917 { 00:19:01.917 "cntlid": 89, 00:19:01.917 "qid": 0, 00:19:01.917 "state": "enabled", 00:19:01.917 "thread": "nvmf_tgt_poll_group_000", 00:19:01.917 "listen_address": { 00:19:01.917 "trtype": "TCP", 00:19:01.917 "adrfam": "IPv4", 00:19:01.917 "traddr": "10.0.0.2", 00:19:01.917 "trsvcid": "4420" 00:19:01.917 }, 00:19:01.917 "peer_address": { 00:19:01.917 "trtype": "TCP", 00:19:01.917 "adrfam": "IPv4", 00:19:01.917 "traddr": "10.0.0.1", 00:19:01.917 "trsvcid": "59582" 00:19:01.917 }, 00:19:01.917 "auth": { 00:19:01.917 "state": "completed", 00:19:01.917 "digest": "sha384", 00:19:01.917 "dhgroup": "ffdhe8192" 00:19:01.917 } 00:19:01.917 } 00:19:01.917 ]' 00:19:01.917 01:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:01.917 01:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:01.917 01:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:01.917 01:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:01.917 01:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:01.917 01:19:24 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:01.917 01:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.917 01:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.177 01:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:NTBjMTIyZjNlZjk5NTQwMDVmNDdiNjVkOTg5ZmNmYmRkZDU1ZWEwOWE0MjhmMmJlGDSRxw==: --dhchap-ctrl-secret DHHC-1:03:MmZmNDY3OWI1YjJhNzRhMGVjZGFmZDZhMDdkNmY3ZjYzZWE1NTEwYTI5MDA1NDE3MzIwMzU0YjcwODdlMTMyMiRVQho=: 00:19:02.748 01:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.748 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.748 01:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:02.748 01:19:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.748 01:19:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.748 01:19:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.748 01:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:02.748 01:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:02.748 01:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:03.008 01:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:19:03.008 01:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:03.008 01:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:03.008 01:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:03.008 01:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:03.008 01:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:03.008 01:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.008 01:19:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.008 01:19:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.008 01:19:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.008 01:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.008 01:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.270 00:19:03.572 01:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc 
bdev_nvme_get_controllers 00:19:03.572 01:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.572 01:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:03.572 01:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.572 01:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:03.572 01:19:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.572 01:19:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.572 01:19:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.572 01:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:03.572 { 00:19:03.572 "cntlid": 91, 00:19:03.572 "qid": 0, 00:19:03.572 "state": "enabled", 00:19:03.572 "thread": "nvmf_tgt_poll_group_000", 00:19:03.572 "listen_address": { 00:19:03.572 "trtype": "TCP", 00:19:03.572 "adrfam": "IPv4", 00:19:03.572 "traddr": "10.0.0.2", 00:19:03.572 "trsvcid": "4420" 00:19:03.572 }, 00:19:03.572 "peer_address": { 00:19:03.572 "trtype": "TCP", 00:19:03.572 "adrfam": "IPv4", 00:19:03.572 "traddr": "10.0.0.1", 00:19:03.572 "trsvcid": "59608" 00:19:03.572 }, 00:19:03.572 "auth": { 00:19:03.572 "state": "completed", 00:19:03.572 "digest": "sha384", 00:19:03.572 "dhgroup": "ffdhe8192" 00:19:03.572 } 00:19:03.572 } 00:19:03.572 ]' 00:19:03.572 01:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:03.572 01:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:03.572 01:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:03.572 01:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 
]] 00:19:03.572 01:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:03.832 01:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.832 01:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.832 01:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.832 01:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MzhjMGRiMTZmZDZlNjIzZGRhODBhMTAyMDNjMzBhMmVshvmJ: --dhchap-ctrl-secret DHHC-1:02:MmVkNjUzNjY1OGU5Zjc4OGY4NGYyYTJjMzVhMzkxMWE5OTQyZmM3ZmYyZGNhODY4jOep9A==: 00:19:04.403 01:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.403 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.403 01:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:04.403 01:19:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.403 01:19:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.403 01:19:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.403 01:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:04.403 01:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:04.403 01:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:04.663 01:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:19:04.663 01:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:04.663 01:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:04.663 01:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:04.663 01:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:04.663 01:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.663 01:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:04.663 01:19:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.663 01:19:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.663 01:19:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.663 01:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:04.663 01:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:05.232 
00:19:05.232 01:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:05.232 01:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:05.232 01:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.232 01:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.232 01:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.232 01:19:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.232 01:19:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.232 01:19:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.232 01:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:05.232 { 00:19:05.232 "cntlid": 93, 00:19:05.232 "qid": 0, 00:19:05.232 "state": "enabled", 00:19:05.232 "thread": "nvmf_tgt_poll_group_000", 00:19:05.232 "listen_address": { 00:19:05.232 "trtype": "TCP", 00:19:05.232 "adrfam": "IPv4", 00:19:05.232 "traddr": "10.0.0.2", 00:19:05.232 "trsvcid": "4420" 00:19:05.232 }, 00:19:05.232 "peer_address": { 00:19:05.232 "trtype": "TCP", 00:19:05.232 "adrfam": "IPv4", 00:19:05.232 "traddr": "10.0.0.1", 00:19:05.232 "trsvcid": "39038" 00:19:05.232 }, 00:19:05.232 "auth": { 00:19:05.232 "state": "completed", 00:19:05.232 "digest": "sha384", 00:19:05.232 "dhgroup": "ffdhe8192" 00:19:05.232 } 00:19:05.232 } 00:19:05.232 ]' 00:19:05.232 01:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:05.232 01:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:05.232 01:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:05.493 01:19:27 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:05.493 01:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:05.493 01:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.493 01:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.493 01:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.493 01:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZmExZjc1NjAwMjQ3MTBjMjMzZWQ2NmZmYjIwMDVkMzU0OGE2MmI0MjFmYWM4Njg0CMe35A==: --dhchap-ctrl-secret DHHC-1:01:YWE2ZmJkNTEyNGQ5MWJlMWYxZTc2MTIyMDg3MTNiYTGb5VQY: 00:19:06.063 01:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.063 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.063 01:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:06.063 01:19:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.063 01:19:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.063 01:19:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.063 01:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:06.063 01:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 
00:19:06.063 01:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:06.324 01:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:19:06.324 01:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:06.324 01:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:06.324 01:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:06.324 01:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:06.324 01:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.324 01:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:19:06.324 01:19:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.324 01:19:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.324 01:19:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.324 01:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:06.324 01:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:06.894 
00:19:06.894 01:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:06.894 01:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:06.894 01:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.894 01:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.894 01:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.894 01:19:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.894 01:19:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.894 01:19:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.894 01:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:06.894 { 00:19:06.894 "cntlid": 95, 00:19:06.894 "qid": 0, 00:19:06.894 "state": "enabled", 00:19:06.894 "thread": "nvmf_tgt_poll_group_000", 00:19:06.894 "listen_address": { 00:19:06.894 "trtype": "TCP", 00:19:06.894 "adrfam": "IPv4", 00:19:06.894 "traddr": "10.0.0.2", 00:19:06.894 "trsvcid": "4420" 00:19:06.894 }, 00:19:06.894 "peer_address": { 00:19:06.894 "trtype": "TCP", 00:19:06.894 "adrfam": "IPv4", 00:19:06.894 "traddr": "10.0.0.1", 00:19:06.894 "trsvcid": "39064" 00:19:06.894 }, 00:19:06.894 "auth": { 00:19:06.894 "state": "completed", 00:19:06.894 "digest": "sha384", 00:19:06.894 "dhgroup": "ffdhe8192" 00:19:06.894 } 00:19:06.894 } 00:19:06.894 ]' 00:19:06.894 01:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:07.155 01:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:07.155 01:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:07.155 01:19:29 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:07.155 01:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:07.155 01:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:07.155 01:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.155 01:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.415 01:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MTY5OGEzYzc1NjBkYWRmNjE0NWI3NTVjNDE4ODQyMTNkNWJlMmMyMDk0YmY0OGYyZmE5ZjAyODc5NjU5MWVmZkZcKJM=: 00:19:07.983 01:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.983 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.983 01:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:07.983 01:19:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.984 01:19:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.984 01:19:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.984 01:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:07.984 01:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:07.984 01:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 
00:19:07.984 01:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:07.984 01:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:07.984 01:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:19:07.984 01:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:07.984 01:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:07.984 01:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:07.984 01:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:07.984 01:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:07.984 01:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:07.984 01:19:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.984 01:19:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.984 01:19:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.984 01:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:07.984 01:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:08.244 00:19:08.244 01:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:08.244 01:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:08.244 01:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.504 01:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.504 01:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.504 01:19:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.504 01:19:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.504 01:19:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.504 01:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:08.504 { 00:19:08.504 "cntlid": 97, 00:19:08.504 "qid": 0, 00:19:08.504 "state": "enabled", 00:19:08.504 "thread": "nvmf_tgt_poll_group_000", 00:19:08.504 "listen_address": { 00:19:08.504 "trtype": "TCP", 00:19:08.504 "adrfam": "IPv4", 00:19:08.504 "traddr": "10.0.0.2", 00:19:08.504 "trsvcid": "4420" 00:19:08.504 }, 00:19:08.504 "peer_address": { 00:19:08.504 "trtype": "TCP", 00:19:08.504 "adrfam": "IPv4", 00:19:08.504 "traddr": "10.0.0.1", 00:19:08.504 "trsvcid": "39096" 00:19:08.504 }, 00:19:08.504 "auth": { 00:19:08.504 "state": "completed", 00:19:08.504 "digest": "sha512", 00:19:08.504 "dhgroup": "null" 00:19:08.504 } 00:19:08.504 } 00:19:08.504 ]' 00:19:08.504 01:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 
00:19:08.504 01:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:08.504 01:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:08.504 01:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:08.504 01:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:08.504 01:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.504 01:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:08.504 01:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:08.764 01:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:NTBjMTIyZjNlZjk5NTQwMDVmNDdiNjVkOTg5ZmNmYmRkZDU1ZWEwOWE0MjhmMmJlGDSRxw==: --dhchap-ctrl-secret DHHC-1:03:MmZmNDY3OWI1YjJhNzRhMGVjZGFmZDZhMDdkNmY3ZjYzZWE1NTEwYTI5MDA1NDE3MzIwMzU0YjcwODdlMTMyMiRVQho=: 00:19:09.335 01:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:09.335 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:09.335 01:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:09.335 01:19:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.335 01:19:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.335 01:19:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:19:09.335 01:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:09.335 01:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:09.335 01:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:09.335 01:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:19:09.335 01:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:09.335 01:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:09.335 01:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:09.335 01:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:09.335 01:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:09.335 01:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.335 01:19:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.335 01:19:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.595 01:19:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.596 01:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.596 01:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.596 00:19:09.596 01:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:09.596 01:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:09.596 01:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.856 01:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.856 01:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:09.856 01:19:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.856 01:19:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.856 01:19:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.856 01:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:09.856 { 00:19:09.856 "cntlid": 99, 00:19:09.856 "qid": 0, 00:19:09.856 "state": "enabled", 00:19:09.856 "thread": "nvmf_tgt_poll_group_000", 00:19:09.856 "listen_address": { 00:19:09.856 "trtype": "TCP", 00:19:09.856 "adrfam": "IPv4", 00:19:09.856 "traddr": "10.0.0.2", 00:19:09.856 "trsvcid": "4420" 00:19:09.856 }, 00:19:09.856 "peer_address": { 00:19:09.856 "trtype": "TCP", 00:19:09.856 "adrfam": "IPv4", 00:19:09.856 "traddr": "10.0.0.1", 00:19:09.856 "trsvcid": "39134" 00:19:09.856 }, 00:19:09.856 "auth": { 00:19:09.856 "state": "completed", 00:19:09.856 "digest": "sha512", 00:19:09.856 "dhgroup": "null" 00:19:09.856 } 00:19:09.856 } 00:19:09.856 ]' 00:19:09.856 
01:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:09.856 01:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:09.856 01:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:09.856 01:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:10.116 01:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:10.116 01:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.116 01:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.116 01:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.116 01:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MzhjMGRiMTZmZDZlNjIzZGRhODBhMTAyMDNjMzBhMmVshvmJ: --dhchap-ctrl-secret DHHC-1:02:MmVkNjUzNjY1OGU5Zjc4OGY4NGYyYTJjMzVhMzkxMWE5OTQyZmM3ZmYyZGNhODY4jOep9A==: 00:19:10.686 01:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.686 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.686 01:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:10.686 01:19:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.686 01:19:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.686 01:19:33 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.686 01:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:10.686 01:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:10.686 01:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:10.961 01:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:19:10.962 01:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:10.962 01:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:10.962 01:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:10.962 01:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:10.962 01:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:10.962 01:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:10.962 01:19:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.962 01:19:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.962 01:19:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.962 01:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:10.962 01:19:33 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:11.228 00:19:11.228 01:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:11.228 01:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:11.228 01:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.228 01:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.228 01:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.228 01:19:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.228 01:19:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.228 01:19:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.228 01:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:11.228 { 00:19:11.228 "cntlid": 101, 00:19:11.228 "qid": 0, 00:19:11.228 "state": "enabled", 00:19:11.228 "thread": "nvmf_tgt_poll_group_000", 00:19:11.228 "listen_address": { 00:19:11.228 "trtype": "TCP", 00:19:11.228 "adrfam": "IPv4", 00:19:11.228 "traddr": "10.0.0.2", 00:19:11.228 "trsvcid": "4420" 00:19:11.228 }, 00:19:11.228 "peer_address": { 00:19:11.228 "trtype": "TCP", 00:19:11.228 "adrfam": "IPv4", 00:19:11.228 "traddr": "10.0.0.1", 00:19:11.228 "trsvcid": "39164" 00:19:11.228 }, 00:19:11.228 "auth": { 00:19:11.228 "state": "completed", 00:19:11.228 "digest": "sha512", 00:19:11.228 "dhgroup": "null" 
00:19:11.228 } 00:19:11.228 } 00:19:11.228 ]' 00:19:11.228 01:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:11.488 01:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:11.488 01:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:11.488 01:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:11.488 01:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:11.488 01:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.488 01:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.488 01:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.748 01:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZmExZjc1NjAwMjQ3MTBjMjMzZWQ2NmZmYjIwMDVkMzU0OGE2MmI0MjFmYWM4Njg0CMe35A==: --dhchap-ctrl-secret DHHC-1:01:YWE2ZmJkNTEyNGQ5MWJlMWYxZTc2MTIyMDg3MTNiYTGb5VQY: 00:19:12.316 01:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.316 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.316 01:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:12.316 01:19:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.316 01:19:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:19:12.316 01:19:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.316 01:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:12.316 01:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:12.316 01:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:12.316 01:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:19:12.316 01:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:12.316 01:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:12.316 01:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:12.316 01:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:12.316 01:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:12.316 01:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:19:12.316 01:19:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.316 01:19:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.316 01:19:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.317 01:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:12.317 01:19:34 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:12.575 00:19:12.575 01:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:12.575 01:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:12.575 01:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:12.834 01:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.834 01:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:12.834 01:19:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.834 01:19:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.834 01:19:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.834 01:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:12.834 { 00:19:12.834 "cntlid": 103, 00:19:12.834 "qid": 0, 00:19:12.834 "state": "enabled", 00:19:12.834 "thread": "nvmf_tgt_poll_group_000", 00:19:12.834 "listen_address": { 00:19:12.834 "trtype": "TCP", 00:19:12.834 "adrfam": "IPv4", 00:19:12.834 "traddr": "10.0.0.2", 00:19:12.834 "trsvcid": "4420" 00:19:12.834 }, 00:19:12.834 "peer_address": { 00:19:12.834 "trtype": "TCP", 00:19:12.834 "adrfam": "IPv4", 00:19:12.834 "traddr": "10.0.0.1", 00:19:12.834 "trsvcid": "39188" 00:19:12.834 }, 00:19:12.834 "auth": { 00:19:12.834 "state": "completed", 00:19:12.834 "digest": "sha512", 00:19:12.834 "dhgroup": "null" 00:19:12.834 } 00:19:12.834 } 
00:19:12.834 ]' 00:19:12.834 01:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:12.834 01:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:12.834 01:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:12.834 01:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:12.834 01:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:12.834 01:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:12.834 01:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:12.834 01:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:13.094 01:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MTY5OGEzYzc1NjBkYWRmNjE0NWI3NTVjNDE4ODQyMTNkNWJlMmMyMDk0YmY0OGYyZmE5ZjAyODc5NjU5MWVmZkZcKJM=: 00:19:13.664 01:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:13.664 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:13.664 01:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:13.664 01:19:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.664 01:19:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.664 01:19:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 
0 ]] 00:19:13.664 01:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:13.664 01:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:13.664 01:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:13.664 01:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:13.925 01:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:19:13.925 01:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:13.925 01:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:13.925 01:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:13.925 01:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:13.925 01:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:13.925 01:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:13.925 01:19:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.925 01:19:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.925 01:19:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.925 01:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:13.925 01:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:14.184 00:19:14.184 01:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:14.184 01:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:14.184 01:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:14.184 01:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.184 01:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:14.184 01:19:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.184 01:19:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.184 01:19:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.184 01:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:14.184 { 00:19:14.184 "cntlid": 105, 00:19:14.184 "qid": 0, 00:19:14.184 "state": "enabled", 00:19:14.184 "thread": "nvmf_tgt_poll_group_000", 00:19:14.184 "listen_address": { 00:19:14.184 "trtype": "TCP", 00:19:14.184 "adrfam": "IPv4", 00:19:14.184 "traddr": "10.0.0.2", 00:19:14.184 "trsvcid": "4420" 00:19:14.184 }, 00:19:14.184 "peer_address": { 00:19:14.184 "trtype": "TCP", 00:19:14.184 "adrfam": "IPv4", 00:19:14.184 "traddr": "10.0.0.1", 00:19:14.184 "trsvcid": "39214" 00:19:14.184 }, 00:19:14.184 "auth": { 00:19:14.184 
"state": "completed", 00:19:14.184 "digest": "sha512", 00:19:14.184 "dhgroup": "ffdhe2048" 00:19:14.184 } 00:19:14.184 } 00:19:14.184 ]' 00:19:14.184 01:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:14.184 01:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:14.184 01:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:14.444 01:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:14.444 01:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:14.444 01:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:14.444 01:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:14.444 01:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.444 01:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:NTBjMTIyZjNlZjk5NTQwMDVmNDdiNjVkOTg5ZmNmYmRkZDU1ZWEwOWE0MjhmMmJlGDSRxw==: --dhchap-ctrl-secret DHHC-1:03:MmZmNDY3OWI1YjJhNzRhMGVjZGFmZDZhMDdkNmY3ZjYzZWE1NTEwYTI5MDA1NDE3MzIwMzU0YjcwODdlMTMyMiRVQho=: 00:19:15.014 01:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.014 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.014 01:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:15.014 01:19:37 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.014 01:19:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.014 01:19:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.014 01:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:15.014 01:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:15.014 01:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:15.274 01:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:19:15.274 01:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:15.274 01:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:15.274 01:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:15.274 01:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:15.274 01:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:15.274 01:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.274 01:19:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.274 01:19:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.274 01:19:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.274 01:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.274 01:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.535 00:19:15.535 01:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:15.535 01:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:15.535 01:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.794 01:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.794 01:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.794 01:19:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.794 01:19:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.794 01:19:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.794 01:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:15.794 { 00:19:15.794 "cntlid": 107, 00:19:15.794 "qid": 0, 00:19:15.794 "state": "enabled", 00:19:15.794 "thread": "nvmf_tgt_poll_group_000", 00:19:15.794 "listen_address": { 00:19:15.794 "trtype": "TCP", 00:19:15.794 "adrfam": "IPv4", 00:19:15.794 "traddr": "10.0.0.2", 00:19:15.794 "trsvcid": "4420" 00:19:15.794 }, 00:19:15.794 "peer_address": { 00:19:15.794 "trtype": "TCP", 
00:19:15.794 "adrfam": "IPv4", 00:19:15.794 "traddr": "10.0.0.1", 00:19:15.794 "trsvcid": "47834" 00:19:15.794 }, 00:19:15.794 "auth": { 00:19:15.794 "state": "completed", 00:19:15.794 "digest": "sha512", 00:19:15.794 "dhgroup": "ffdhe2048" 00:19:15.794 } 00:19:15.794 } 00:19:15.794 ]' 00:19:15.794 01:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:15.794 01:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:15.794 01:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:15.794 01:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:15.794 01:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:15.794 01:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:15.794 01:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.794 01:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.054 01:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MzhjMGRiMTZmZDZlNjIzZGRhODBhMTAyMDNjMzBhMmVshvmJ: --dhchap-ctrl-secret DHHC-1:02:MmVkNjUzNjY1OGU5Zjc4OGY4NGYyYTJjMzVhMzkxMWE5OTQyZmM3ZmYyZGNhODY4jOep9A==: 00:19:16.625 01:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.625 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.625 01:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:16.625 01:19:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.625 01:19:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.625 01:19:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.625 01:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:16.625 01:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:16.625 01:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:16.885 01:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:19:16.885 01:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:16.885 01:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:16.885 01:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:16.885 01:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:16.885 01:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:16.885 01:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.885 01:19:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.885 01:19:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.885 01:19:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:19:16.885 01:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.885 01:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.885 00:19:16.885 01:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:16.885 01:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:16.885 01:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.145 01:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.145 01:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.145 01:19:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.145 01:19:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.145 01:19:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.145 01:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:17.145 { 00:19:17.145 "cntlid": 109, 00:19:17.145 "qid": 0, 00:19:17.145 "state": "enabled", 00:19:17.145 "thread": "nvmf_tgt_poll_group_000", 00:19:17.145 "listen_address": { 00:19:17.145 "trtype": "TCP", 00:19:17.145 "adrfam": "IPv4", 00:19:17.145 "traddr": "10.0.0.2", 00:19:17.145 "trsvcid": "4420" 
00:19:17.145 }, 00:19:17.145 "peer_address": { 00:19:17.145 "trtype": "TCP", 00:19:17.145 "adrfam": "IPv4", 00:19:17.145 "traddr": "10.0.0.1", 00:19:17.145 "trsvcid": "47874" 00:19:17.145 }, 00:19:17.145 "auth": { 00:19:17.145 "state": "completed", 00:19:17.145 "digest": "sha512", 00:19:17.145 "dhgroup": "ffdhe2048" 00:19:17.145 } 00:19:17.145 } 00:19:17.145 ]' 00:19:17.145 01:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:17.145 01:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:17.145 01:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:17.405 01:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:17.405 01:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:17.405 01:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.405 01:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.405 01:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.405 01:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZmExZjc1NjAwMjQ3MTBjMjMzZWQ2NmZmYjIwMDVkMzU0OGE2MmI0MjFmYWM4Njg0CMe35A==: --dhchap-ctrl-secret DHHC-1:01:YWE2ZmJkNTEyNGQ5MWJlMWYxZTc2MTIyMDg3MTNiYTGb5VQY: 00:19:18.007 01:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:18.007 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:18.007 01:19:40 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:18.007 01:19:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.007 01:19:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.007 01:19:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.007 01:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:18.007 01:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:18.007 01:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:18.267 01:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:19:18.267 01:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:18.267 01:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:18.267 01:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:18.267 01:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:18.267 01:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:18.267 01:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:19:18.267 01:19:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.267 01:19:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.267 01:19:40 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.267 01:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:18.267 01:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:18.527 00:19:18.527 01:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:18.527 01:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:18.527 01:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.527 01:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.787 01:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:18.787 01:19:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.787 01:19:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.787 01:19:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.787 01:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:18.787 { 00:19:18.787 "cntlid": 111, 00:19:18.787 "qid": 0, 00:19:18.787 "state": "enabled", 00:19:18.787 "thread": "nvmf_tgt_poll_group_000", 00:19:18.787 "listen_address": { 00:19:18.787 "trtype": "TCP", 00:19:18.787 "adrfam": "IPv4", 00:19:18.787 "traddr": "10.0.0.2", 
00:19:18.787 "trsvcid": "4420" 00:19:18.787 }, 00:19:18.787 "peer_address": { 00:19:18.787 "trtype": "TCP", 00:19:18.787 "adrfam": "IPv4", 00:19:18.787 "traddr": "10.0.0.1", 00:19:18.787 "trsvcid": "47918" 00:19:18.787 }, 00:19:18.787 "auth": { 00:19:18.787 "state": "completed", 00:19:18.787 "digest": "sha512", 00:19:18.787 "dhgroup": "ffdhe2048" 00:19:18.787 } 00:19:18.787 } 00:19:18.787 ]' 00:19:18.787 01:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:18.787 01:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:18.787 01:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:18.787 01:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:18.787 01:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:18.787 01:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.787 01:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.787 01:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.047 01:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MTY5OGEzYzc1NjBkYWRmNjE0NWI3NTVjNDE4ODQyMTNkNWJlMmMyMDk0YmY0OGYyZmE5ZjAyODc5NjU5MWVmZkZcKJM=: 00:19:19.616 01:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.616 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.616 01:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:19.616 01:19:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.616 01:19:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.616 01:19:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.616 01:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:19.616 01:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:19.616 01:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:19.616 01:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:19.616 01:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:19:19.616 01:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:19.616 01:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:19.616 01:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:19.616 01:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:19.616 01:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:19.616 01:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:19.616 01:19:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.616 01:19:42 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.616 01:19:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.616 01:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:19.616 01:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:19.874 00:19:19.874 01:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:19.875 01:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:19.875 01:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.133 01:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.133 01:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.133 01:19:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.133 01:19:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.133 01:19:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.133 01:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:20.133 { 00:19:20.133 "cntlid": 113, 00:19:20.133 "qid": 0, 00:19:20.133 "state": "enabled", 00:19:20.133 "thread": 
"nvmf_tgt_poll_group_000", 00:19:20.133 "listen_address": { 00:19:20.133 "trtype": "TCP", 00:19:20.133 "adrfam": "IPv4", 00:19:20.133 "traddr": "10.0.0.2", 00:19:20.133 "trsvcid": "4420" 00:19:20.133 }, 00:19:20.133 "peer_address": { 00:19:20.133 "trtype": "TCP", 00:19:20.133 "adrfam": "IPv4", 00:19:20.133 "traddr": "10.0.0.1", 00:19:20.133 "trsvcid": "47942" 00:19:20.133 }, 00:19:20.133 "auth": { 00:19:20.133 "state": "completed", 00:19:20.133 "digest": "sha512", 00:19:20.133 "dhgroup": "ffdhe3072" 00:19:20.133 } 00:19:20.133 } 00:19:20.133 ]' 00:19:20.133 01:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:20.133 01:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:20.133 01:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:20.133 01:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:20.133 01:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:20.392 01:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.392 01:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.392 01:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:20.392 01:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:NTBjMTIyZjNlZjk5NTQwMDVmNDdiNjVkOTg5ZmNmYmRkZDU1ZWEwOWE0MjhmMmJlGDSRxw==: --dhchap-ctrl-secret DHHC-1:03:MmZmNDY3OWI1YjJhNzRhMGVjZGFmZDZhMDdkNmY3ZjYzZWE1NTEwYTI5MDA1NDE3MzIwMzU0YjcwODdlMTMyMiRVQho=: 00:19:20.961 01:19:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.961 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:20.961 01:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:20.961 01:19:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.961 01:19:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.961 01:19:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.961 01:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:20.961 01:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:20.961 01:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:21.220 01:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:19:21.220 01:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:21.220 01:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:21.220 01:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:21.220 01:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:21.220 01:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:21.220 01:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:19:21.220 01:19:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.220 01:19:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.220 01:19:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.220 01:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:21.220 01:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:21.478 00:19:21.478 01:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:21.478 01:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:21.478 01:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:21.737 01:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.737 01:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:21.737 01:19:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.737 01:19:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.737 01:19:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.737 01:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:19:21.737 { 00:19:21.737 "cntlid": 115, 00:19:21.737 "qid": 0, 00:19:21.737 "state": "enabled", 00:19:21.737 "thread": "nvmf_tgt_poll_group_000", 00:19:21.737 "listen_address": { 00:19:21.737 "trtype": "TCP", 00:19:21.737 "adrfam": "IPv4", 00:19:21.737 "traddr": "10.0.0.2", 00:19:21.737 "trsvcid": "4420" 00:19:21.737 }, 00:19:21.737 "peer_address": { 00:19:21.737 "trtype": "TCP", 00:19:21.737 "adrfam": "IPv4", 00:19:21.737 "traddr": "10.0.0.1", 00:19:21.737 "trsvcid": "47976" 00:19:21.737 }, 00:19:21.737 "auth": { 00:19:21.737 "state": "completed", 00:19:21.737 "digest": "sha512", 00:19:21.737 "dhgroup": "ffdhe3072" 00:19:21.737 } 00:19:21.737 } 00:19:21.737 ]' 00:19:21.737 01:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:21.737 01:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:21.737 01:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:21.737 01:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:21.737 01:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:21.737 01:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:21.737 01:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:21.737 01:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.996 01:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MzhjMGRiMTZmZDZlNjIzZGRhODBhMTAyMDNjMzBhMmVshvmJ: --dhchap-ctrl-secret 
DHHC-1:02:MmVkNjUzNjY1OGU5Zjc4OGY4NGYyYTJjMzVhMzkxMWE5OTQyZmM3ZmYyZGNhODY4jOep9A==: 00:19:22.565 01:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.565 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.565 01:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:22.565 01:19:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.565 01:19:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.565 01:19:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.565 01:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:22.565 01:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:22.566 01:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:22.566 01:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:19:22.566 01:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:22.566 01:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:22.566 01:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:22.566 01:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:22.566 01:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.566 01:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:22.566 01:19:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.566 01:19:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.566 01:19:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.566 01:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:22.566 01:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:22.825 00:19:22.825 01:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:22.825 01:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:22.825 01:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.085 01:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.085 01:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.085 01:19:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.085 01:19:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.085 01:19:45 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.085 01:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:23.085 { 00:19:23.085 "cntlid": 117, 00:19:23.085 "qid": 0, 00:19:23.085 "state": "enabled", 00:19:23.085 "thread": "nvmf_tgt_poll_group_000", 00:19:23.085 "listen_address": { 00:19:23.085 "trtype": "TCP", 00:19:23.085 "adrfam": "IPv4", 00:19:23.085 "traddr": "10.0.0.2", 00:19:23.085 "trsvcid": "4420" 00:19:23.085 }, 00:19:23.085 "peer_address": { 00:19:23.085 "trtype": "TCP", 00:19:23.085 "adrfam": "IPv4", 00:19:23.085 "traddr": "10.0.0.1", 00:19:23.085 "trsvcid": "48018" 00:19:23.085 }, 00:19:23.085 "auth": { 00:19:23.086 "state": "completed", 00:19:23.086 "digest": "sha512", 00:19:23.086 "dhgroup": "ffdhe3072" 00:19:23.086 } 00:19:23.086 } 00:19:23.086 ]' 00:19:23.086 01:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:23.086 01:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:23.086 01:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:23.086 01:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:23.086 01:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:23.345 01:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.345 01:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:23.345 01:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.345 01:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 
--dhchap-secret DHHC-1:02:ZmExZjc1NjAwMjQ3MTBjMjMzZWQ2NmZmYjIwMDVkMzU0OGE2MmI0MjFmYWM4Njg0CMe35A==: --dhchap-ctrl-secret DHHC-1:01:YWE2ZmJkNTEyNGQ5MWJlMWYxZTc2MTIyMDg3MTNiYTGb5VQY: 00:19:23.912 01:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:23.912 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:23.912 01:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:23.912 01:19:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.912 01:19:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.912 01:19:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.912 01:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:23.912 01:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:23.912 01:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:24.171 01:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:19:24.171 01:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:24.171 01:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:24.171 01:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:24.171 01:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:24.171 01:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.171 01:19:46 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:19:24.171 01:19:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.171 01:19:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.171 01:19:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.171 01:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:24.171 01:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:24.431 00:19:24.431 01:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:24.431 01:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:24.431 01:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.691 01:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.691 01:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:24.691 01:19:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.691 01:19:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.691 01:19:46 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.691 01:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:24.691 { 00:19:24.691 "cntlid": 119, 00:19:24.691 "qid": 0, 00:19:24.691 "state": "enabled", 00:19:24.691 "thread": "nvmf_tgt_poll_group_000", 00:19:24.691 "listen_address": { 00:19:24.691 "trtype": "TCP", 00:19:24.691 "adrfam": "IPv4", 00:19:24.691 "traddr": "10.0.0.2", 00:19:24.691 "trsvcid": "4420" 00:19:24.691 }, 00:19:24.691 "peer_address": { 00:19:24.691 "trtype": "TCP", 00:19:24.691 "adrfam": "IPv4", 00:19:24.691 "traddr": "10.0.0.1", 00:19:24.691 "trsvcid": "41526" 00:19:24.691 }, 00:19:24.691 "auth": { 00:19:24.691 "state": "completed", 00:19:24.691 "digest": "sha512", 00:19:24.691 "dhgroup": "ffdhe3072" 00:19:24.691 } 00:19:24.691 } 00:19:24.691 ]' 00:19:24.691 01:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:24.691 01:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:24.691 01:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:24.691 01:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:24.691 01:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:24.691 01:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:24.691 01:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.691 01:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:24.952 01:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 
--dhchap-secret DHHC-1:03:MTY5OGEzYzc1NjBkYWRmNjE0NWI3NTVjNDE4ODQyMTNkNWJlMmMyMDk0YmY0OGYyZmE5ZjAyODc5NjU5MWVmZkZcKJM=: 00:19:25.522 01:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.522 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.522 01:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:25.522 01:19:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.522 01:19:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.522 01:19:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.522 01:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:25.522 01:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:25.522 01:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:25.522 01:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:25.522 01:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:19:25.522 01:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:25.522 01:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:25.522 01:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:25.522 01:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:25.522 01:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:25.522 01:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.522 01:19:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.522 01:19:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.522 01:19:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.522 01:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.522 01:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.782 00:19:25.782 01:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:25.782 01:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:25.782 01:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.043 01:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.043 01:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.043 01:19:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
00:19:26.043 01:19:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.043 01:19:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.043 01:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:26.043 { 00:19:26.043 "cntlid": 121, 00:19:26.043 "qid": 0, 00:19:26.043 "state": "enabled", 00:19:26.043 "thread": "nvmf_tgt_poll_group_000", 00:19:26.043 "listen_address": { 00:19:26.043 "trtype": "TCP", 00:19:26.043 "adrfam": "IPv4", 00:19:26.043 "traddr": "10.0.0.2", 00:19:26.043 "trsvcid": "4420" 00:19:26.043 }, 00:19:26.043 "peer_address": { 00:19:26.043 "trtype": "TCP", 00:19:26.043 "adrfam": "IPv4", 00:19:26.043 "traddr": "10.0.0.1", 00:19:26.043 "trsvcid": "41542" 00:19:26.043 }, 00:19:26.043 "auth": { 00:19:26.043 "state": "completed", 00:19:26.043 "digest": "sha512", 00:19:26.043 "dhgroup": "ffdhe4096" 00:19:26.043 } 00:19:26.043 } 00:19:26.043 ]' 00:19:26.043 01:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:26.043 01:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:26.043 01:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:26.043 01:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:26.043 01:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:26.303 01:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.303 01:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.303 01:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:26.303 01:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:NTBjMTIyZjNlZjk5NTQwMDVmNDdiNjVkOTg5ZmNmYmRkZDU1ZWEwOWE0MjhmMmJlGDSRxw==: --dhchap-ctrl-secret DHHC-1:03:MmZmNDY3OWI1YjJhNzRhMGVjZGFmZDZhMDdkNmY3ZjYzZWE1NTEwYTI5MDA1NDE3MzIwMzU0YjcwODdlMTMyMiRVQho=: 00:19:26.873 01:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:26.873 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:26.873 01:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:26.873 01:19:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.873 01:19:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.873 01:19:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.873 01:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:26.873 01:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:26.873 01:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:27.133 01:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:19:27.133 01:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:27.133 01:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:27.133 01:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:27.133 01:19:49 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:27.133 01:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:27.133 01:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.133 01:19:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.133 01:19:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.133 01:19:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.133 01:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.133 01:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.393 00:19:27.393 01:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:27.393 01:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:27.393 01:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:27.653 01:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.653 01:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:27.653 01:19:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.653 01:19:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.653 01:19:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.653 01:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:27.653 { 00:19:27.653 "cntlid": 123, 00:19:27.653 "qid": 0, 00:19:27.653 "state": "enabled", 00:19:27.653 "thread": "nvmf_tgt_poll_group_000", 00:19:27.653 "listen_address": { 00:19:27.653 "trtype": "TCP", 00:19:27.653 "adrfam": "IPv4", 00:19:27.653 "traddr": "10.0.0.2", 00:19:27.653 "trsvcid": "4420" 00:19:27.653 }, 00:19:27.653 "peer_address": { 00:19:27.653 "trtype": "TCP", 00:19:27.653 "adrfam": "IPv4", 00:19:27.653 "traddr": "10.0.0.1", 00:19:27.653 "trsvcid": "41580" 00:19:27.653 }, 00:19:27.653 "auth": { 00:19:27.653 "state": "completed", 00:19:27.653 "digest": "sha512", 00:19:27.653 "dhgroup": "ffdhe4096" 00:19:27.653 } 00:19:27.653 } 00:19:27.653 ]' 00:19:27.653 01:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:27.653 01:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:27.653 01:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:27.653 01:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:27.653 01:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:27.653 01:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:27.653 01:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.653 01:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.913 01:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MzhjMGRiMTZmZDZlNjIzZGRhODBhMTAyMDNjMzBhMmVshvmJ: --dhchap-ctrl-secret DHHC-1:02:MmVkNjUzNjY1OGU5Zjc4OGY4NGYyYTJjMzVhMzkxMWE5OTQyZmM3ZmYyZGNhODY4jOep9A==: 00:19:28.483 01:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.483 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.483 01:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:28.483 01:19:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.483 01:19:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.483 01:19:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.483 01:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:28.483 01:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:28.483 01:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:28.483 01:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:19:28.483 01:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:28.483 01:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 
00:19:28.483 01:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:28.483 01:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:28.483 01:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.483 01:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:28.483 01:19:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.483 01:19:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.483 01:19:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.483 01:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:28.483 01:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:28.743 00:19:28.743 01:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:28.743 01:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:28.743 01:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.001 01:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:19:29.001 01:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.001 01:19:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.001 01:19:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.001 01:19:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.001 01:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:29.001 { 00:19:29.001 "cntlid": 125, 00:19:29.001 "qid": 0, 00:19:29.001 "state": "enabled", 00:19:29.001 "thread": "nvmf_tgt_poll_group_000", 00:19:29.001 "listen_address": { 00:19:29.001 "trtype": "TCP", 00:19:29.001 "adrfam": "IPv4", 00:19:29.001 "traddr": "10.0.0.2", 00:19:29.001 "trsvcid": "4420" 00:19:29.001 }, 00:19:29.001 "peer_address": { 00:19:29.001 "trtype": "TCP", 00:19:29.001 "adrfam": "IPv4", 00:19:29.001 "traddr": "10.0.0.1", 00:19:29.001 "trsvcid": "41612" 00:19:29.001 }, 00:19:29.001 "auth": { 00:19:29.001 "state": "completed", 00:19:29.001 "digest": "sha512", 00:19:29.001 "dhgroup": "ffdhe4096" 00:19:29.001 } 00:19:29.002 } 00:19:29.002 ]' 00:19:29.002 01:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:29.002 01:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:29.002 01:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:29.002 01:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:29.002 01:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:29.261 01:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.261 01:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.261 01:19:51 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.261 01:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZmExZjc1NjAwMjQ3MTBjMjMzZWQ2NmZmYjIwMDVkMzU0OGE2MmI0MjFmYWM4Njg0CMe35A==: --dhchap-ctrl-secret DHHC-1:01:YWE2ZmJkNTEyNGQ5MWJlMWYxZTc2MTIyMDg3MTNiYTGb5VQY: 00:19:29.832 01:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:29.832 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:29.832 01:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:29.832 01:19:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.832 01:19:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.832 01:19:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.832 01:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:29.832 01:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:29.832 01:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:30.092 01:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:19:30.092 01:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
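The `--dhchap-secret` strings passed to `nvme connect` above follow the `DHHC-1:NN:<base64>:` representation. As I understand that format (an assumption, not stated in the log), the two-digit field indicates the key size (01 = 32-byte, 02 = 48-byte, 03 = 64-byte key material), and the base64 payload is the key with a 4-byte CRC-32 appended. A sketch that unpacks one of the `:01:` secrets from this run:

```python
import base64
import zlib

# One of the --dhchap-secret values from the nvme connect lines above
secret = "DHHC-1:01:MzhjMGRiMTZmZDZlNjIzZGRhODBhMTAyMDNjMzBhMmVshvmJ:"

tag, indicator, b64 = secret.rstrip(":").split(":")
assert tag == "DHHC-1"

raw = base64.b64decode(b64)
key, crc = raw[:-4], raw[-4:]

# Assumed mapping from the indicator digit to the key length
expected_len = {"01": 32, "02": 48, "03": 64}[indicator]
assert len(key) == expected_len

# The autotest keys are ASCII hex strings; this parses or raises
int(key, 16)

# If the trailing bytes are indeed a little-endian CRC-32 of the key,
# this prints True (left unasserted, since that detail is an assumption)
print(zlib.crc32(key) == int.from_bytes(crc, "little"))
```

The same unpacking applies to the `:02:` and `:03:` secrets elsewhere in the trace, whose base64 payloads decode to 48 + 4 and 64 + 4 bytes respectively.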
00:19:30.092 01:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:30.092 01:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:30.092 01:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:30.092 01:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.092 01:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:19:30.092 01:19:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.092 01:19:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.092 01:19:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.092 01:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:30.092 01:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:30.352 00:19:30.352 01:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:30.353 01:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:30.353 01:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:30.613 01:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:19:30.613 01:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:30.613 01:19:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.613 01:19:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.613 01:19:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.613 01:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:30.613 { 00:19:30.613 "cntlid": 127, 00:19:30.613 "qid": 0, 00:19:30.613 "state": "enabled", 00:19:30.613 "thread": "nvmf_tgt_poll_group_000", 00:19:30.613 "listen_address": { 00:19:30.613 "trtype": "TCP", 00:19:30.613 "adrfam": "IPv4", 00:19:30.613 "traddr": "10.0.0.2", 00:19:30.613 "trsvcid": "4420" 00:19:30.613 }, 00:19:30.613 "peer_address": { 00:19:30.613 "trtype": "TCP", 00:19:30.613 "adrfam": "IPv4", 00:19:30.613 "traddr": "10.0.0.1", 00:19:30.613 "trsvcid": "41636" 00:19:30.613 }, 00:19:30.613 "auth": { 00:19:30.613 "state": "completed", 00:19:30.613 "digest": "sha512", 00:19:30.613 "dhgroup": "ffdhe4096" 00:19:30.613 } 00:19:30.613 } 00:19:30.613 ]' 00:19:30.613 01:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:30.613 01:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:30.613 01:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:30.613 01:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:30.613 01:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:30.613 01:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:30.613 01:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.613 01:19:52 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.874 01:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MTY5OGEzYzc1NjBkYWRmNjE0NWI3NTVjNDE4ODQyMTNkNWJlMmMyMDk0YmY0OGYyZmE5ZjAyODc5NjU5MWVmZkZcKJM=: 00:19:31.443 01:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:31.443 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:31.443 01:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:31.443 01:19:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.443 01:19:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.443 01:19:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.443 01:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:31.443 01:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:31.443 01:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:31.443 01:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:31.443 01:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:19:31.443 01:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- 
# local digest dhgroup key ckey qpairs 00:19:31.443 01:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:31.443 01:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:31.443 01:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:31.443 01:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:31.443 01:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.443 01:19:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.443 01:19:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.443 01:19:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.443 01:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.443 01:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:32.014 00:19:32.014 01:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:32.014 01:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:32.014 01:19:54 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:19:32.014 01:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.014 01:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:32.014 01:19:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.014 01:19:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.014 01:19:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.014 01:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:32.014 { 00:19:32.014 "cntlid": 129, 00:19:32.014 "qid": 0, 00:19:32.014 "state": "enabled", 00:19:32.014 "thread": "nvmf_tgt_poll_group_000", 00:19:32.014 "listen_address": { 00:19:32.014 "trtype": "TCP", 00:19:32.014 "adrfam": "IPv4", 00:19:32.014 "traddr": "10.0.0.2", 00:19:32.014 "trsvcid": "4420" 00:19:32.014 }, 00:19:32.014 "peer_address": { 00:19:32.014 "trtype": "TCP", 00:19:32.014 "adrfam": "IPv4", 00:19:32.014 "traddr": "10.0.0.1", 00:19:32.014 "trsvcid": "41658" 00:19:32.014 }, 00:19:32.014 "auth": { 00:19:32.014 "state": "completed", 00:19:32.014 "digest": "sha512", 00:19:32.014 "dhgroup": "ffdhe6144" 00:19:32.014 } 00:19:32.014 } 00:19:32.014 ]' 00:19:32.014 01:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:32.014 01:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:32.014 01:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:32.306 01:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:32.306 01:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:32.306 01:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:32.306 01:19:54 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:32.306 01:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.306 01:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:NTBjMTIyZjNlZjk5NTQwMDVmNDdiNjVkOTg5ZmNmYmRkZDU1ZWEwOWE0MjhmMmJlGDSRxw==: --dhchap-ctrl-secret DHHC-1:03:MmZmNDY3OWI1YjJhNzRhMGVjZGFmZDZhMDdkNmY3ZjYzZWE1NTEwYTI5MDA1NDE3MzIwMzU0YjcwODdlMTMyMiRVQho=: 00:19:32.895 01:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.895 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.895 01:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:32.895 01:19:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.895 01:19:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.895 01:19:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.895 01:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:32.895 01:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:32.896 01:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:33.155 01:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 1 00:19:33.155 01:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:33.155 01:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:33.155 01:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:33.155 01:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:33.155 01:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:33.155 01:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.155 01:19:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.155 01:19:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.155 01:19:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.155 01:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.155 01:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.414 00:19:33.414 01:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:33.414 01:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:33.414 01:19:55 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.673 01:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.673 01:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.673 01:19:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.673 01:19:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.673 01:19:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.673 01:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:33.673 { 00:19:33.673 "cntlid": 131, 00:19:33.673 "qid": 0, 00:19:33.673 "state": "enabled", 00:19:33.673 "thread": "nvmf_tgt_poll_group_000", 00:19:33.673 "listen_address": { 00:19:33.673 "trtype": "TCP", 00:19:33.673 "adrfam": "IPv4", 00:19:33.673 "traddr": "10.0.0.2", 00:19:33.673 "trsvcid": "4420" 00:19:33.673 }, 00:19:33.673 "peer_address": { 00:19:33.673 "trtype": "TCP", 00:19:33.673 "adrfam": "IPv4", 00:19:33.673 "traddr": "10.0.0.1", 00:19:33.673 "trsvcid": "41684" 00:19:33.673 }, 00:19:33.673 "auth": { 00:19:33.673 "state": "completed", 00:19:33.673 "digest": "sha512", 00:19:33.673 "dhgroup": "ffdhe6144" 00:19:33.673 } 00:19:33.673 } 00:19:33.673 ]' 00:19:33.673 01:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:33.673 01:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:33.673 01:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:33.673 01:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:33.673 01:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:33.673 01:19:56 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.673 01:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.673 01:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.933 01:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MzhjMGRiMTZmZDZlNjIzZGRhODBhMTAyMDNjMzBhMmVshvmJ: --dhchap-ctrl-secret DHHC-1:02:MmVkNjUzNjY1OGU5Zjc4OGY4NGYyYTJjMzVhMzkxMWE5OTQyZmM3ZmYyZGNhODY4jOep9A==: 00:19:34.503 01:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.503 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.503 01:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:34.503 01:19:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.503 01:19:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.503 01:19:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.503 01:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:34.503 01:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:34.503 01:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups ffdhe6144 00:19:34.503 01:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:19:34.503 01:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:34.503 01:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:34.503 01:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:34.503 01:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:34.503 01:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:34.503 01:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:34.503 01:19:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.503 01:19:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.503 01:19:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.503 01:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:34.503 01:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:35.073 00:19:35.073 01:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:35.073 01:19:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:35.073 01:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.073 01:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.073 01:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.073 01:19:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.073 01:19:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.073 01:19:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.073 01:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:35.073 { 00:19:35.073 "cntlid": 133, 00:19:35.073 "qid": 0, 00:19:35.073 "state": "enabled", 00:19:35.073 "thread": "nvmf_tgt_poll_group_000", 00:19:35.073 "listen_address": { 00:19:35.073 "trtype": "TCP", 00:19:35.073 "adrfam": "IPv4", 00:19:35.073 "traddr": "10.0.0.2", 00:19:35.073 "trsvcid": "4420" 00:19:35.073 }, 00:19:35.073 "peer_address": { 00:19:35.073 "trtype": "TCP", 00:19:35.073 "adrfam": "IPv4", 00:19:35.073 "traddr": "10.0.0.1", 00:19:35.073 "trsvcid": "60846" 00:19:35.073 }, 00:19:35.073 "auth": { 00:19:35.073 "state": "completed", 00:19:35.073 "digest": "sha512", 00:19:35.073 "dhgroup": "ffdhe6144" 00:19:35.073 } 00:19:35.073 } 00:19:35.073 ]' 00:19:35.073 01:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:35.073 01:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:35.333 01:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:35.333 01:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:35.333 01:19:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:35.333 01:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.333 01:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.333 01:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.333 01:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZmExZjc1NjAwMjQ3MTBjMjMzZWQ2NmZmYjIwMDVkMzU0OGE2MmI0MjFmYWM4Njg0CMe35A==: --dhchap-ctrl-secret DHHC-1:01:YWE2ZmJkNTEyNGQ5MWJlMWYxZTc2MTIyMDg3MTNiYTGb5VQY: 00:19:35.902 01:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.902 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.902 01:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:35.902 01:19:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.902 01:19:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.902 01:19:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.902 01:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:35.902 01:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:35.902 01:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:36.162 01:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:19:36.162 01:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:36.162 01:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:36.162 01:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:36.162 01:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:36.162 01:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:36.162 01:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:19:36.162 01:19:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.162 01:19:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.162 01:19:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.163 01:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:36.163 01:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:36.423 00:19:36.423 01:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
jq -r '.[].name' 00:19:36.423 01:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:36.423 01:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.683 01:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.683 01:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.683 01:19:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.683 01:19:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.683 01:19:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.683 01:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:36.683 { 00:19:36.683 "cntlid": 135, 00:19:36.683 "qid": 0, 00:19:36.683 "state": "enabled", 00:19:36.683 "thread": "nvmf_tgt_poll_group_000", 00:19:36.683 "listen_address": { 00:19:36.683 "trtype": "TCP", 00:19:36.683 "adrfam": "IPv4", 00:19:36.683 "traddr": "10.0.0.2", 00:19:36.683 "trsvcid": "4420" 00:19:36.683 }, 00:19:36.683 "peer_address": { 00:19:36.683 "trtype": "TCP", 00:19:36.683 "adrfam": "IPv4", 00:19:36.683 "traddr": "10.0.0.1", 00:19:36.683 "trsvcid": "60858" 00:19:36.683 }, 00:19:36.683 "auth": { 00:19:36.683 "state": "completed", 00:19:36.683 "digest": "sha512", 00:19:36.683 "dhgroup": "ffdhe6144" 00:19:36.683 } 00:19:36.683 } 00:19:36.683 ]' 00:19:36.683 01:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:36.683 01:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:36.683 01:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:36.943 01:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == 
\f\f\d\h\e\6\1\4\4 ]] 00:19:36.943 01:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:36.943 01:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.943 01:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.943 01:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.943 01:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MTY5OGEzYzc1NjBkYWRmNjE0NWI3NTVjNDE4ODQyMTNkNWJlMmMyMDk0YmY0OGYyZmE5ZjAyODc5NjU5MWVmZkZcKJM=: 00:19:37.513 01:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.513 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.513 01:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:37.513 01:19:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.513 01:19:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.513 01:19:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.513 01:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:37.513 01:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:37.513 01:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:37.513 01:19:59 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:37.773 01:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:19:37.773 01:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:37.773 01:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:37.773 01:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:37.773 01:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:37.773 01:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.773 01:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:37.773 01:20:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.773 01:20:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.773 01:20:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.773 01:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:37.774 01:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.346 00:19:38.346 01:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:38.346 01:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:38.346 01:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.346 01:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.346 01:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.346 01:20:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.346 01:20:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.346 01:20:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.346 01:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:38.346 { 00:19:38.346 "cntlid": 137, 00:19:38.346 "qid": 0, 00:19:38.346 "state": "enabled", 00:19:38.346 "thread": "nvmf_tgt_poll_group_000", 00:19:38.346 "listen_address": { 00:19:38.346 "trtype": "TCP", 00:19:38.346 "adrfam": "IPv4", 00:19:38.346 "traddr": "10.0.0.2", 00:19:38.346 "trsvcid": "4420" 00:19:38.346 }, 00:19:38.346 "peer_address": { 00:19:38.346 "trtype": "TCP", 00:19:38.346 "adrfam": "IPv4", 00:19:38.346 "traddr": "10.0.0.1", 00:19:38.346 "trsvcid": "60888" 00:19:38.346 }, 00:19:38.346 "auth": { 00:19:38.346 "state": "completed", 00:19:38.346 "digest": "sha512", 00:19:38.346 "dhgroup": "ffdhe8192" 00:19:38.346 } 00:19:38.346 } 00:19:38.346 ]' 00:19:38.346 01:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:38.604 01:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:38.604 01:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- 
# jq -r '.[0].auth.dhgroup' 00:19:38.605 01:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:38.605 01:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:38.605 01:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.605 01:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.605 01:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.864 01:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:NTBjMTIyZjNlZjk5NTQwMDVmNDdiNjVkOTg5ZmNmYmRkZDU1ZWEwOWE0MjhmMmJlGDSRxw==: --dhchap-ctrl-secret DHHC-1:03:MmZmNDY3OWI1YjJhNzRhMGVjZGFmZDZhMDdkNmY3ZjYzZWE1NTEwYTI5MDA1NDE3MzIwMzU0YjcwODdlMTMyMiRVQho=: 00:19:39.434 01:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.434 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.434 01:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:39.434 01:20:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.434 01:20:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.434 01:20:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.434 01:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:39.434 01:20:01 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:39.434 01:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:39.434 01:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:19:39.434 01:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:39.434 01:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:39.434 01:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:39.434 01:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:39.434 01:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.434 01:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.434 01:20:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.434 01:20:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.434 01:20:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.434 01:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.434 01:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.003 00:19:40.003 01:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:40.003 01:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:40.003 01:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.262 01:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.262 01:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.262 01:20:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.262 01:20:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.262 01:20:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.262 01:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:40.262 { 00:19:40.262 "cntlid": 139, 00:19:40.262 "qid": 0, 00:19:40.262 "state": "enabled", 00:19:40.262 "thread": "nvmf_tgt_poll_group_000", 00:19:40.262 "listen_address": { 00:19:40.262 "trtype": "TCP", 00:19:40.262 "adrfam": "IPv4", 00:19:40.262 "traddr": "10.0.0.2", 00:19:40.262 "trsvcid": "4420" 00:19:40.262 }, 00:19:40.262 "peer_address": { 00:19:40.262 "trtype": "TCP", 00:19:40.262 "adrfam": "IPv4", 00:19:40.262 "traddr": "10.0.0.1", 00:19:40.262 "trsvcid": "60910" 00:19:40.262 }, 00:19:40.262 "auth": { 00:19:40.262 "state": "completed", 00:19:40.262 "digest": "sha512", 00:19:40.262 "dhgroup": "ffdhe8192" 00:19:40.262 } 00:19:40.262 } 00:19:40.262 ]' 00:19:40.262 01:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:40.262 01:20:02 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:40.262 01:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:40.262 01:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:40.262 01:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:40.262 01:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.262 01:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.262 01:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.522 01:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MzhjMGRiMTZmZDZlNjIzZGRhODBhMTAyMDNjMzBhMmVshvmJ: --dhchap-ctrl-secret DHHC-1:02:MmVkNjUzNjY1OGU5Zjc4OGY4NGYyYTJjMzVhMzkxMWE5OTQyZmM3ZmYyZGNhODY4jOep9A==: 00:19:41.091 01:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.091 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.091 01:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:41.091 01:20:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.091 01:20:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.091 01:20:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.091 01:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid 
in "${!keys[@]}" 00:19:41.091 01:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:41.091 01:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:41.091 01:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:19:41.091 01:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:41.091 01:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:41.091 01:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:41.091 01:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:41.091 01:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:41.091 01:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:41.091 01:20:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.091 01:20:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.091 01:20:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.091 01:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:41.091 01:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:41.658 00:19:41.658 01:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:41.658 01:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:41.658 01:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.918 01:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.918 01:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.918 01:20:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.918 01:20:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.918 01:20:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.918 01:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:41.918 { 00:19:41.918 "cntlid": 141, 00:19:41.918 "qid": 0, 00:19:41.918 "state": "enabled", 00:19:41.918 "thread": "nvmf_tgt_poll_group_000", 00:19:41.918 "listen_address": { 00:19:41.918 "trtype": "TCP", 00:19:41.918 "adrfam": "IPv4", 00:19:41.918 "traddr": "10.0.0.2", 00:19:41.918 "trsvcid": "4420" 00:19:41.918 }, 00:19:41.918 "peer_address": { 00:19:41.918 "trtype": "TCP", 00:19:41.918 "adrfam": "IPv4", 00:19:41.918 "traddr": "10.0.0.1", 00:19:41.918 "trsvcid": "60930" 00:19:41.918 }, 00:19:41.918 "auth": { 00:19:41.918 "state": "completed", 00:19:41.918 "digest": "sha512", 00:19:41.918 "dhgroup": "ffdhe8192" 00:19:41.918 } 00:19:41.918 } 00:19:41.918 ]' 00:19:41.918 01:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:19:41.918 01:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:41.918 01:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:41.918 01:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:41.918 01:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:41.918 01:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:41.918 01:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.918 01:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.178 01:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZmExZjc1NjAwMjQ3MTBjMjMzZWQ2NmZmYjIwMDVkMzU0OGE2MmI0MjFmYWM4Njg0CMe35A==: --dhchap-ctrl-secret DHHC-1:01:YWE2ZmJkNTEyNGQ5MWJlMWYxZTc2MTIyMDg3MTNiYTGb5VQY: 00:19:42.748 01:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.748 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.748 01:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:42.748 01:20:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.748 01:20:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.748 01:20:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.748 
01:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:42.748 01:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:42.748 01:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:43.009 01:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:19:43.009 01:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:43.009 01:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:43.009 01:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:43.009 01:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:43.009 01:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:43.009 01:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:19:43.009 01:20:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.009 01:20:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.009 01:20:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.009 01:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:43.009 01:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:43.269 00:19:43.269 01:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:43.269 01:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:43.269 01:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.529 01:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.529 01:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.529 01:20:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.529 01:20:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.529 01:20:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.529 01:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:43.529 { 00:19:43.529 "cntlid": 143, 00:19:43.529 "qid": 0, 00:19:43.529 "state": "enabled", 00:19:43.529 "thread": "nvmf_tgt_poll_group_000", 00:19:43.529 "listen_address": { 00:19:43.529 "trtype": "TCP", 00:19:43.529 "adrfam": "IPv4", 00:19:43.529 "traddr": "10.0.0.2", 00:19:43.529 "trsvcid": "4420" 00:19:43.529 }, 00:19:43.529 "peer_address": { 00:19:43.529 "trtype": "TCP", 00:19:43.529 "adrfam": "IPv4", 00:19:43.529 "traddr": "10.0.0.1", 00:19:43.529 "trsvcid": "60954" 00:19:43.529 }, 00:19:43.529 "auth": { 00:19:43.529 "state": "completed", 00:19:43.529 "digest": "sha512", 00:19:43.529 "dhgroup": "ffdhe8192" 00:19:43.529 } 00:19:43.529 } 00:19:43.529 ]' 00:19:43.529 01:20:05 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:43.529 01:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:43.529 01:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:43.529 01:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:43.529 01:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:43.789 01:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.789 01:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.789 01:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.789 01:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MTY5OGEzYzc1NjBkYWRmNjE0NWI3NTVjNDE4ODQyMTNkNWJlMmMyMDk0YmY0OGYyZmE5ZjAyODc5NjU5MWVmZkZcKJM=: 00:19:44.359 01:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.359 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.359 01:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:44.359 01:20:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.359 01:20:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.359 01:20:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.359 
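Each `connect_authenticate` cycle above builds its optional `--dhchap-ctrlr-key` argument with the `ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})` idiom, so bidirectional authentication is requested only for key slots that actually have a controller secret (here key3 has none, which is why the `nvmf_subsystem_add_host` call passes only `--dhchap-key key3`). A minimal sketch of that bash expansion, with a stand-in `ckeys` array in place of the test script's real key table:

```shell
#!/usr/bin/env bash
# Sketch of the ckey construction used by connect_authenticate in target/auth.sh.
# "ckeys" is a stand-in: an empty entry means no controller key for that slot.
ckeys=("secret-for-0" "" "secret-for-2")

build_ckey_args() {
    local keyid=$1
    # ${ckeys[$keyid]:+...} expands to the flag pair only when the entry is
    # non-empty, so the ctrlr-key flag is emitted only for bidirectional slots.
    local ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "${ckey[@]}"
}

build_ckey_args 0   # prints: --dhchap-ctrlr-key ckey0
build_ckey_args 1   # prints an empty line (no controller key for slot 1)
```

Because `:+` treats an empty string the same as an unset variable, slots without a controller secret silently drop the flag instead of passing an empty argument to `rpc.py`.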
01:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:19:44.359 01:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:19:44.359 01:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:19:44.359 01:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:44.359 01:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:44.359 01:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:44.619 01:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:19:44.619 01:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:44.619 01:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:44.619 01:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:44.619 01:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:44.619 01:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.619 01:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:44.619 01:20:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.619 01:20:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.619 01:20:06 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.619 01:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:44.619 01:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:45.189 00:19:45.189 01:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:45.189 01:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.189 01:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:45.189 01:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.189 01:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.189 01:20:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.189 01:20:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.189 01:20:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.189 01:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:45.189 { 00:19:45.189 "cntlid": 145, 00:19:45.189 "qid": 0, 00:19:45.189 "state": "enabled", 00:19:45.189 "thread": "nvmf_tgt_poll_group_000", 00:19:45.189 "listen_address": { 00:19:45.189 "trtype": "TCP", 00:19:45.189 "adrfam": 
"IPv4", 00:19:45.189 "traddr": "10.0.0.2", 00:19:45.189 "trsvcid": "4420" 00:19:45.189 }, 00:19:45.189 "peer_address": { 00:19:45.189 "trtype": "TCP", 00:19:45.189 "adrfam": "IPv4", 00:19:45.189 "traddr": "10.0.0.1", 00:19:45.189 "trsvcid": "56312" 00:19:45.189 }, 00:19:45.189 "auth": { 00:19:45.189 "state": "completed", 00:19:45.189 "digest": "sha512", 00:19:45.189 "dhgroup": "ffdhe8192" 00:19:45.189 } 00:19:45.189 } 00:19:45.189 ]' 00:19:45.189 01:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:45.450 01:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:45.450 01:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:45.450 01:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:45.450 01:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:45.450 01:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.450 01:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.450 01:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.709 01:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:NTBjMTIyZjNlZjk5NTQwMDVmNDdiNjVkOTg5ZmNmYmRkZDU1ZWEwOWE0MjhmMmJlGDSRxw==: --dhchap-ctrl-secret DHHC-1:03:MmZmNDY3OWI1YjJhNzRhMGVjZGFmZDZhMDdkNmY3ZjYzZWE1NTEwYTI5MDA1NDE3MzIwMzU0YjcwODdlMTMyMiRVQho=: 00:19:46.279 01:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:46.279 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:46.279 01:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:46.279 01:20:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.279 01:20:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.279 01:20:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.279 01:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:19:46.279 01:20:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.279 01:20:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.279 01:20:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.279 01:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:46.279 01:20:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:46.279 01:20:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:46.279 01:20:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:46.279 01:20:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:46.279 01:20:08 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc
00:19:46.279 01:20:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:19:46.279 01:20:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2
00:19:46.279 01:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2
00:19:46.540 request:
00:19:46.540 {
00:19:46.540 "name": "nvme0",
00:19:46.540 "trtype": "tcp",
00:19:46.540 "traddr": "10.0.0.2",
00:19:46.540 "adrfam": "ipv4",
00:19:46.540 "trsvcid": "4420",
00:19:46.540 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:19:46.540 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:19:46.540 "prchk_reftag": false,
00:19:46.540 "prchk_guard": false,
00:19:46.540 "hdgst": false,
00:19:46.540 "ddgst": false,
00:19:46.540 "dhchap_key": "key2",
00:19:46.540 "method": "bdev_nvme_attach_controller",
00:19:46.540 "req_id": 1
00:19:46.540 }
00:19:46.540 Got JSON-RPC error response
00:19:46.540 response:
00:19:46.540 {
00:19:46.540 "code": -5,
00:19:46.540 "message": "Input/output error"
00:19:46.540 }
00:19:46.540 01:20:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1
00:19:46.540 01:20:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:19:46.540 01:20:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:19:46.540 01:20:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 ))
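The failed attach above goes through scripts/rpc.py, which packs the CLI arguments into a JSON-RPC call over the host application's Unix socket (/var/tmp/host.sock). A minimal sketch of that payload, with the params copied from the request echoed in the log; the "jsonrpc" and "id" envelope fields are illustrative assumptions, not taken from the log:

```python
import json

# Params mirror the bdev_nvme_attach_controller request printed in the log.
# The host was re-registered with key1 only, so presenting key2 fails
# DH-HMAC-CHAP and SPDK reports it as JSON-RPC error -5 (Input/output error).
request = {
    "jsonrpc": "2.0",   # assumed envelope field (rpc.py fills this in)
    "id": 1,            # assumed request id
    "method": "bdev_nvme_attach_controller",
    "params": {
        "name": "nvme0",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2024-03.io.spdk:cnode0",
        "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
        "prchk_reftag": False,
        "prchk_guard": False,
        "hdgst": False,
        "ddgst": False,
        "dhchap_key": "key2",
    },
}

# Error object returned by the target, as echoed in the log.
error_response = {"code": -5, "message": "Input/output error"}

print(json.dumps(request, indent=2))
```

The subsequent NOT-prefixed test steps in the log repeat this call with other key/ctrlr-key combinations and expect the same -5 error whenever the presented keys do not match what nvmf_subsystem_add_host registered.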
00:19:46.540 01:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:46.540 01:20:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.540 01:20:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.540 01:20:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.540 01:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.540 01:20:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.540 01:20:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.540 01:20:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.540 01:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:46.540 01:20:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:46.540 01:20:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:46.540 01:20:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:46.540 01:20:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:46.540 
01:20:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc
00:19:46.540 01:20:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:19:46.540 01:20:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:19:46.540 01:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:19:47.177 request:
00:19:47.177 {
00:19:47.177 "name": "nvme0",
00:19:47.177 "trtype": "tcp",
00:19:47.177 "traddr": "10.0.0.2",
00:19:47.177 "adrfam": "ipv4",
00:19:47.177 "trsvcid": "4420",
00:19:47.177 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:19:47.177 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:19:47.177 "prchk_reftag": false,
00:19:47.177 "prchk_guard": false,
00:19:47.177 "hdgst": false,
00:19:47.177 "ddgst": false,
00:19:47.177 "dhchap_key": "key1",
00:19:47.177 "dhchap_ctrlr_key": "ckey2",
00:19:47.177 "method": "bdev_nvme_attach_controller",
00:19:47.177 "req_id": 1
00:19:47.177 }
00:19:47.177 Got JSON-RPC error response
00:19:47.177 response:
00:19:47.177 {
00:19:47.177 "code": -5,
00:19:47.177 "message": "Input/output error"
00:19:47.177 }
00:19:47.177 01:20:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1
00:19:47.177 01:20:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:19:47.177 01:20:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:19:47.177 01:20:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:47.177 01:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:47.177 01:20:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.177 01:20:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.177 01:20:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.177 01:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:19:47.177 01:20:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.177 01:20:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.177 01:20:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.177 01:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.177 01:20:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:47.177 01:20:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.177 01:20:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:47.177 01:20:09 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:19:47.177 01:20:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc
00:19:47.177 01:20:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:19:47.177 01:20:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:47.177 01:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:47.438 request:
00:19:47.438 {
00:19:47.438 "name": "nvme0",
00:19:47.438 "trtype": "tcp",
00:19:47.438 "traddr": "10.0.0.2",
00:19:47.438 "adrfam": "ipv4",
00:19:47.438 "trsvcid": "4420",
00:19:47.438 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:19:47.438 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:19:47.438 "prchk_reftag": false,
00:19:47.438 "prchk_guard": false,
00:19:47.438 "hdgst": false,
00:19:47.438 "ddgst": false,
00:19:47.438 "dhchap_key": "key1",
00:19:47.438 "dhchap_ctrlr_key": "ckey1",
00:19:47.438 "method": "bdev_nvme_attach_controller",
00:19:47.438 "req_id": 1
00:19:47.438 }
00:19:47.438 Got JSON-RPC error response
00:19:47.438 response:
00:19:47.438 {
00:19:47.438 "code": -5,
00:19:47.438 "message": "Input/output error"
00:19:47.438 }
00:19:47.438 01:20:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1
00:19:47.438 01:20:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:19:47.438 01:20:09
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:47.438 01:20:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:47.438 01:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:47.438 01:20:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.438 01:20:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.438 01:20:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.438 01:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 895973 00:19:47.438 01:20:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 895973 ']' 00:19:47.438 01:20:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 895973 00:19:47.438 01:20:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:19:47.438 01:20:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:47.438 01:20:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 895973 00:19:47.438 01:20:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:47.438 01:20:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:47.438 01:20:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 895973' 00:19:47.438 killing process with pid 895973 00:19:47.438 01:20:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 895973 00:19:47.438 01:20:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 895973 00:19:47.697 01:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 
00:19:47.697 01:20:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:47.697 01:20:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:47.697 01:20:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.697 01:20:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=916104 00:19:47.697 01:20:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:19:47.697 01:20:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 916104 00:19:47.697 01:20:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 916104 ']' 00:19:47.697 01:20:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:47.697 01:20:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:47.697 01:20:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:47.697 01:20:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:47.697 01:20:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.636 01:20:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:48.636 01:20:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:48.636 01:20:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:48.636 01:20:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:48.636 01:20:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.636 01:20:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:48.636 01:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:48.636 01:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 916104 00:19:48.636 01:20:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 916104 ']' 00:19:48.636 01:20:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:48.636 01:20:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:48.636 01:20:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:48.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:48.636 01:20:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:48.636 01:20:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.896 01:20:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:48.896 01:20:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:48.896 01:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:19:48.896 01:20:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.896 01:20:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.896 01:20:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.896 01:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:19:48.896 01:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:48.896 01:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:48.896 01:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:48.896 01:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:48.896 01:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.896 01:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:19:48.896 01:20:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.896 01:20:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.896 01:20:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.896 01:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:19:48.896 01:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:19:49.463
00:19:49.463 01:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:49.463 01:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:49.463 01:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:49.723 01:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:49.723 01:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:49.723 01:20:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:49.723 01:20:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:49.723 01:20:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:49.723 01:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:49.723 {
00:19:49.723 "cntlid": 1,
00:19:49.723 "qid": 0,
00:19:49.723 "state": "enabled",
00:19:49.723 "thread": "nvmf_tgt_poll_group_000",
00:19:49.723 "listen_address": {
00:19:49.723 "trtype": "TCP",
00:19:49.723 "adrfam": "IPv4",
00:19:49.723 "traddr": "10.0.0.2",
00:19:49.723 "trsvcid": "4420"
00:19:49.723 },
00:19:49.723 "peer_address": {
00:19:49.723 "trtype": "TCP",
00:19:49.723 "adrfam": "IPv4",
00:19:49.723 "traddr": "10.0.0.1",
00:19:49.723 "trsvcid": "56380"
00:19:49.723 },
00:19:49.723 "auth": {
00:19:49.723 "state": "completed",
00:19:49.723 "digest": "sha512",
00:19:49.723 "dhgroup": "ffdhe8192"
00:19:49.723 }
00:19:49.723 }
00:19:49.723 ]'
00:19:49.723 01:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:19:49.723 01:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:19:49.723 01:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:19:49.723 01:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:19:49.723 01:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:49.723 01:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:49.723 01:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:49.723 01:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:49.981 01:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MTY5OGEzYzc1NjBkYWRmNjE0NWI3NTVjNDE4ODQyMTNkNWJlMmMyMDk0YmY0OGYyZmE5ZjAyODc5NjU5MWVmZkZcKJM=:
00:19:50.550 01:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:50.550 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:50.550 01:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:19:50.550 01:20:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 --
xtrace_disable 00:19:50.550 01:20:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.551 01:20:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.551 01:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:19:50.551 01:20:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.551 01:20:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.551 01:20:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.551 01:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:19:50.551 01:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:19:50.551 01:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:50.551 01:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:50.551 01:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:50.551 01:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:50.551 01:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:50.551 01:20:13 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@640 -- # type -t hostrpc
00:19:50.551 01:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:19:50.551 01:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:19:50.551 01:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:19:50.810 request:
00:19:50.810 {
00:19:50.810 "name": "nvme0",
00:19:50.810 "trtype": "tcp",
00:19:50.810 "traddr": "10.0.0.2",
00:19:50.810 "adrfam": "ipv4",
00:19:50.810 "trsvcid": "4420",
00:19:50.810 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:19:50.810 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:19:50.810 "prchk_reftag": false,
00:19:50.810 "prchk_guard": false,
00:19:50.810 "hdgst": false,
00:19:50.810 "ddgst": false,
00:19:50.810 "dhchap_key": "key3",
00:19:50.810 "method": "bdev_nvme_attach_controller",
00:19:50.810 "req_id": 1
00:19:50.810 }
00:19:50.810 Got JSON-RPC error response
00:19:50.810 response:
00:19:50.810 {
00:19:50.810 "code": -5,
00:19:50.810 "message": "Input/output error"
00:19:50.810 }
00:19:50.810 01:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1
00:19:50.810 01:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:19:50.810 01:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:19:50.810 01:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:19:50.810 01:20:13
nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:19:50.810 01:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:19:50.810 01:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:50.810 01:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:51.070 01:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:51.070 01:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:51.070 01:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:51.070 01:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:51.070 01:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:51.070 01:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:51.070 01:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:51.070 01:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:51.070 
01:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:19:51.331 request:
00:19:51.331 {
00:19:51.331 "name": "nvme0",
00:19:51.331 "trtype": "tcp",
00:19:51.331 "traddr": "10.0.0.2",
00:19:51.331 "adrfam": "ipv4",
00:19:51.331 "trsvcid": "4420",
00:19:51.331 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:19:51.331 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:19:51.331 "prchk_reftag": false,
00:19:51.331 "prchk_guard": false,
00:19:51.331 "hdgst": false,
00:19:51.331 "ddgst": false,
00:19:51.331 "dhchap_key": "key3",
00:19:51.331 "method": "bdev_nvme_attach_controller",
00:19:51.331 "req_id": 1
00:19:51.331 }
00:19:51.331 Got JSON-RPC error response
00:19:51.331 response:
00:19:51.331 {
00:19:51.331 "code": -5,
00:19:51.331 "message": "Input/output error"
00:19:51.331 }
00:19:51.331 01:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1
00:19:51.331 01:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:19:51.331 01:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:19:51.331 01:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:19:51.331 01:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=,
00:19:51.331 01:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512
00:19:51.331 01:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=,
00:19:51.331 01:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:19:51.331 01:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc
bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:51.331 01:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:51.331 01:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:51.331 01:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.331 01:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.331 01:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.331 01:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:51.331 01:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.331 01:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.331 01:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.331 01:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:51.331 01:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:51.331 01:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:19:51.332 01:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc
00:19:51.332 01:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:19:51.332 01:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc
00:19:51.332 01:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:19:51.332 01:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:19:51.332 01:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:19:51.591 request:
00:19:51.591 {
00:19:51.591 "name": "nvme0",
00:19:51.591 "trtype": "tcp",
00:19:51.591 "traddr": "10.0.0.2",
00:19:51.591 "adrfam": "ipv4",
00:19:51.591 "trsvcid": "4420",
00:19:51.591 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:19:51.591 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:19:51.591 "prchk_reftag": false,
00:19:51.591 "prchk_guard": false,
00:19:51.591 "hdgst": false,
00:19:51.591 "ddgst": false,
00:19:51.591 "dhchap_key": "key0",
00:19:51.591 "dhchap_ctrlr_key": "key1",
00:19:51.591 "method": "bdev_nvme_attach_controller",
00:19:51.591 "req_id": 1
00:19:51.591 }
00:19:51.591 Got JSON-RPC error response
00:19:51.591 response:
00:19:51.591 {
00:19:51.591 "code": -5,
00:19:51.591 "message": "Input/output error"
00:19:51.591 }
00:19:51.591 01:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1
00:19:51.591 01:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:19:51.591 01:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:19:51.591 01:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:19:51.591 01:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0
00:19:51.591 01:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0
00:19:51.850
00:19:51.850 01:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers
00:19:51.850 01:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:51.850 01:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name'
00:19:52.109 01:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:52.109 01:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:52.109 01:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:52.109 01:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap -
SIGINT SIGTERM EXIT 00:19:52.109 01:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:19:52.109 01:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 896014 00:19:52.110 01:20:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 896014 ']' 00:19:52.110 01:20:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 896014 00:19:52.110 01:20:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:19:52.110 01:20:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:52.110 01:20:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 896014 00:19:52.369 01:20:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:52.369 01:20:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:52.369 01:20:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 896014' 00:19:52.369 killing process with pid 896014 00:19:52.369 01:20:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 896014 00:19:52.369 01:20:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 896014 00:19:52.628 01:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:19:52.628 01:20:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:52.628 01:20:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:19:52.628 01:20:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:52.628 01:20:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:19:52.628 01:20:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:52.628 01:20:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:52.628 rmmod nvme_tcp 00:19:52.628 rmmod nvme_fabrics 00:19:52.628 
rmmod nvme_keyring 00:19:52.628 01:20:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:52.628 01:20:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:19:52.628 01:20:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:19:52.628 01:20:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 916104 ']' 00:19:52.628 01:20:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 916104 00:19:52.628 01:20:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 916104 ']' 00:19:52.628 01:20:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 916104 00:19:52.628 01:20:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:19:52.628 01:20:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:52.628 01:20:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 916104 00:19:52.628 01:20:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:52.628 01:20:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:52.628 01:20:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 916104' 00:19:52.628 killing process with pid 916104 00:19:52.628 01:20:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 916104 00:19:52.628 01:20:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 916104 00:19:52.887 01:20:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:52.887 01:20:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:52.887 01:20:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:52.887 01:20:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:52.887 
01:20:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:52.887 01:20:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:52.887 01:20:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:52.887 01:20:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:54.797 01:20:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:54.797 01:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.aKe /tmp/spdk.key-sha256.wGr /tmp/spdk.key-sha384.bw5 /tmp/spdk.key-sha512.i6l /tmp/spdk.key-sha512.7yQ /tmp/spdk.key-sha384.bvN /tmp/spdk.key-sha256.2hP '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:19:54.797 00:19:54.797 real 2m8.594s 00:19:54.797 user 4m56.138s 00:19:54.797 sys 0m18.513s 00:19:54.797 01:20:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:54.797 01:20:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.797 ************************************ 00:19:54.797 END TEST nvmf_auth_target 00:19:54.797 ************************************ 00:19:55.058 01:20:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:55.058 01:20:17 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:19:55.058 01:20:17 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:55.058 01:20:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:19:55.058 01:20:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:55.058 01:20:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:55.058 ************************************ 00:19:55.058 
START TEST nvmf_bdevio_no_huge 00:19:55.058 ************************************ 00:19:55.058 01:20:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:55.058 * Looking for test storage... 00:19:55.058 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:55.058 01:20:17 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:55.058 01:20:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:19:55.058 01:20:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:55.058 01:20:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:55.058 01:20:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:55.058 01:20:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:55.058 01:20:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:55.058 01:20:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:55.058 01:20:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:55.058 01:20:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:55.058 01:20:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:55.058 01:20:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:55.058 01:20:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:55.058 01:20:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:55.058 01:20:17 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:55.058 01:20:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:55.058 01:20:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:55.058 01:20:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:55.058 01:20:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:55.058 01:20:17 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:55.058 01:20:17 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:55.058 01:20:17 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:55.059 01:20:17 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.059 01:20:17 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.059 01:20:17 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.059 01:20:17 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:19:55.059 01:20:17 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.059 01:20:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:19:55.059 01:20:17 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:55.059 01:20:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:55.059 01:20:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:55.059 01:20:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:55.059 01:20:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:55.059 01:20:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:55.059 01:20:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:55.059 01:20:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:55.059 01:20:17 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:55.059 01:20:17 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:55.059 01:20:17 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:19:55.059 01:20:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:55.059 01:20:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:55.059 01:20:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:55.059 01:20:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:55.059 01:20:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:55.059 01:20:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:55.059 01:20:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:55.059 01:20:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:55.059 01:20:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:55.059 01:20:17 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:55.059 01:20:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:19:55.059 01:20:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:00.342 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == 
unknown ]] 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:00.342 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:00.342 Found net devices under 0000:86:00.0: cvl_0_0 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:00.342 Found net devices under 0000:86:00.1: cvl_0_1 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:00.342 01:20:22 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:00.342 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:00.343 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:00.343 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:00.343 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:00.343 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:00.343 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:00.343 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:00.343 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:00.343 
01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:00.343 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:00.343 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:00.343 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:00.343 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:20:00.343 00:20:00.343 --- 10.0.0.2 ping statistics --- 00:20:00.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.343 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:20:00.343 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:00.343 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:00.343 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:20:00.343 00:20:00.343 --- 10.0.0.1 ping statistics --- 00:20:00.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.343 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:20:00.343 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:00.343 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:20:00.343 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:00.343 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:00.343 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:00.343 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:00.343 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:00.343 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:00.343 01:20:22 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:00.603 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:00.604 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:00.604 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:00.604 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:00.604 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:20:00.604 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=920361 00:20:00.604 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 920361 00:20:00.604 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 920361 ']' 00:20:00.604 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:00.604 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:00.604 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:00.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:00.604 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:00.604 01:20:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:00.604 [2024-07-25 01:20:22.880018] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:20:00.604 [2024-07-25 01:20:22.880071] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:20:00.604 [2024-07-25 01:20:22.942408] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:00.604 [2024-07-25 01:20:23.026939] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:00.604 [2024-07-25 01:20:23.026976] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:00.604 [2024-07-25 01:20:23.026983] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:00.604 [2024-07-25 01:20:23.026989] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:00.604 [2024-07-25 01:20:23.026994] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:00.604 [2024-07-25 01:20:23.027105] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:00.604 [2024-07-25 01:20:23.027217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:20:00.604 [2024-07-25 01:20:23.027323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:20:00.604 [2024-07-25 01:20:23.027322] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:01.545 01:20:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:01.545 01:20:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:20:01.545 01:20:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:01.545 01:20:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:01.545 01:20:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:01.545 01:20:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:01.545 01:20:23 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:01.545 01:20:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.545 01:20:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:01.545 [2024-07-25 01:20:23.754949] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:01.545 01:20:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.545 01:20:23 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:01.545 01:20:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.545 01:20:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:01.545 Malloc0 00:20:01.545 01:20:23 
nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.545 01:20:23 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:01.545 01:20:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.545 01:20:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:01.545 01:20:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.545 01:20:23 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:01.545 01:20:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.545 01:20:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:01.545 01:20:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.545 01:20:23 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:01.545 01:20:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.545 01:20:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:01.545 [2024-07-25 01:20:23.799215] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:01.545 01:20:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.545 01:20:23 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:20:01.545 01:20:23 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:01.545 01:20:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:20:01.545 01:20:23 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:20:01.545 01:20:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:01.545 01:20:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:01.545 { 00:20:01.545 "params": { 00:20:01.545 "name": "Nvme$subsystem", 00:20:01.545 "trtype": "$TEST_TRANSPORT", 00:20:01.545 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:01.545 "adrfam": "ipv4", 00:20:01.545 "trsvcid": "$NVMF_PORT", 00:20:01.545 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:01.545 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:01.545 "hdgst": ${hdgst:-false}, 00:20:01.545 "ddgst": ${ddgst:-false} 00:20:01.545 }, 00:20:01.545 "method": "bdev_nvme_attach_controller" 00:20:01.545 } 00:20:01.545 EOF 00:20:01.545 )") 00:20:01.545 01:20:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:20:01.545 01:20:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:20:01.545 01:20:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:20:01.545 01:20:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:01.545 "params": { 00:20:01.545 "name": "Nvme1", 00:20:01.545 "trtype": "tcp", 00:20:01.545 "traddr": "10.0.0.2", 00:20:01.545 "adrfam": "ipv4", 00:20:01.545 "trsvcid": "4420", 00:20:01.545 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:01.545 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:01.545 "hdgst": false, 00:20:01.545 "ddgst": false 00:20:01.545 }, 00:20:01.545 "method": "bdev_nvme_attach_controller" 00:20:01.545 }' 00:20:01.545 [2024-07-25 01:20:23.846789] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:20:01.545 [2024-07-25 01:20:23.846833] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid920591 ] 00:20:01.545 [2024-07-25 01:20:23.906269] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:01.545 [2024-07-25 01:20:23.993096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:01.545 [2024-07-25 01:20:23.993190] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:01.545 [2024-07-25 01:20:23.993192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:01.808 I/O targets: 00:20:01.808 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:01.808 00:20:01.808 00:20:01.808 CUnit - A unit testing framework for C - Version 2.1-3 00:20:01.808 http://cunit.sourceforge.net/ 00:20:01.808 00:20:01.808 00:20:01.808 Suite: bdevio tests on: Nvme1n1 00:20:01.808 Test: blockdev write read block ...passed 00:20:01.808 Test: blockdev write zeroes read block ...passed 00:20:01.808 Test: blockdev write zeroes read no split ...passed 00:20:01.808 Test: blockdev write zeroes read split ...passed 00:20:02.067 Test: blockdev write zeroes read split partial ...passed 00:20:02.067 Test: blockdev reset ...[2024-07-25 01:20:24.341471] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:02.067 [2024-07-25 01:20:24.341534] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd98300 (9): Bad file descriptor 00:20:02.067 [2024-07-25 01:20:24.447986] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:02.067 passed 00:20:02.067 Test: blockdev write read 8 blocks ...passed 00:20:02.067 Test: blockdev write read size > 128k ...passed 00:20:02.067 Test: blockdev write read invalid size ...passed 00:20:02.067 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:02.067 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:02.067 Test: blockdev write read max offset ...passed 00:20:02.327 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:02.327 Test: blockdev writev readv 8 blocks ...passed 00:20:02.327 Test: blockdev writev readv 30 x 1block ...passed 00:20:02.327 Test: blockdev writev readv block ...passed 00:20:02.327 Test: blockdev writev readv size > 128k ...passed 00:20:02.327 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:02.327 Test: blockdev comparev and writev ...[2024-07-25 01:20:24.682753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:02.327 [2024-07-25 01:20:24.682784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:02.327 [2024-07-25 01:20:24.682799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:02.327 [2024-07-25 01:20:24.682807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:02.327 [2024-07-25 01:20:24.683321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:02.327 [2024-07-25 01:20:24.683333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:02.327 [2024-07-25 01:20:24.683344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:02.327 [2024-07-25 01:20:24.683353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:02.327 [2024-07-25 01:20:24.683849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:02.327 [2024-07-25 01:20:24.683860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:02.327 [2024-07-25 01:20:24.683872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:02.327 [2024-07-25 01:20:24.683880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:02.327 [2024-07-25 01:20:24.684372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:02.327 [2024-07-25 01:20:24.684384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:02.327 [2024-07-25 01:20:24.684396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:02.327 [2024-07-25 01:20:24.684404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:02.327 passed 00:20:02.327 Test: blockdev nvme passthru rw ...passed 00:20:02.327 Test: blockdev nvme passthru vendor specific ...[2024-07-25 01:20:24.767915] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:02.327 [2024-07-25 01:20:24.767930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:02.327 [2024-07-25 01:20:24.768301] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:02.327 [2024-07-25 01:20:24.768313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:02.327 [2024-07-25 01:20:24.768678] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:02.327 [2024-07-25 01:20:24.768689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:02.327 [2024-07-25 01:20:24.769057] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:02.327 [2024-07-25 01:20:24.769069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:02.327 passed 00:20:02.327 Test: blockdev nvme admin passthru ...passed 00:20:02.587 Test: blockdev copy ...passed 00:20:02.587 00:20:02.587 Run Summary: Type Total Ran Passed Failed Inactive 00:20:02.587 suites 1 1 n/a 0 0 00:20:02.587 tests 23 23 23 0 0 00:20:02.587 asserts 152 152 152 0 n/a 00:20:02.587 00:20:02.587 Elapsed time = 1.368 seconds 00:20:02.846 01:20:25 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:02.846 01:20:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.846 01:20:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:02.847 01:20:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.847 01:20:25 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:02.847 01:20:25 nvmf_tcp.nvmf_bdevio_no_huge -- 
target/bdevio.sh@30 -- # nvmftestfini 00:20:02.847 01:20:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:02.847 01:20:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:20:02.847 01:20:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:02.847 01:20:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:20:02.847 01:20:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:02.847 01:20:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:02.847 rmmod nvme_tcp 00:20:02.847 rmmod nvme_fabrics 00:20:02.847 rmmod nvme_keyring 00:20:02.847 01:20:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:02.847 01:20:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:20:02.847 01:20:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:20:02.847 01:20:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 920361 ']' 00:20:02.847 01:20:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 920361 00:20:02.847 01:20:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 920361 ']' 00:20:02.847 01:20:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 920361 00:20:02.847 01:20:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:20:02.847 01:20:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:02.847 01:20:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 920361 00:20:02.847 01:20:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:20:02.847 01:20:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:20:02.847 01:20:25 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 920361' 00:20:02.847 killing process with pid 920361 00:20:02.847 01:20:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 920361 00:20:02.847 01:20:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 920361 00:20:03.107 01:20:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:03.107 01:20:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:03.107 01:20:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:03.107 01:20:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:03.107 01:20:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:03.107 01:20:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:03.107 01:20:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:03.107 01:20:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:05.650 01:20:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:05.650 00:20:05.650 real 0m10.250s 00:20:05.650 user 0m13.551s 00:20:05.650 sys 0m4.849s 00:20:05.650 01:20:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:05.650 01:20:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:05.650 ************************************ 00:20:05.650 END TEST nvmf_bdevio_no_huge 00:20:05.650 ************************************ 00:20:05.650 01:20:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:05.650 01:20:27 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:05.650 01:20:27 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:05.650 01:20:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:05.650 01:20:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:05.650 ************************************ 00:20:05.650 START TEST nvmf_tls 00:20:05.650 ************************************ 00:20:05.650 01:20:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:05.650 * Looking for test storage... 00:20:05.650 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:05.650 01:20:27 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:05.650 01:20:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:20:05.650 01:20:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:05.650 01:20:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:05.650 01:20:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:05.650 01:20:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:05.650 01:20:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:05.650 01:20:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:05.650 01:20:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:05.650 01:20:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:05.650 01:20:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:05.650 01:20:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:05.650 01:20:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:05.650 01:20:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:05.650 01:20:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:05.650 01:20:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:05.650 01:20:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:05.650 01:20:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:05.650 01:20:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:05.650 01:20:27 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:05.650 01:20:27 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:05.650 01:20:27 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:05.650 01:20:27 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.651 01:20:27 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.651 01:20:27 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.651 01:20:27 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:20:05.651 01:20:27 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.651 01:20:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:20:05.651 01:20:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:05.651 01:20:27 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:05.651 01:20:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:05.651 01:20:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:05.651 01:20:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:05.651 01:20:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:05.651 01:20:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:05.651 01:20:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:05.651 01:20:27 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:05.651 01:20:27 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:20:05.651 01:20:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:05.651 01:20:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:05.651 01:20:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:05.651 01:20:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:05.651 01:20:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:05.651 01:20:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:05.651 01:20:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:05.651 01:20:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:05.651 01:20:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:05.651 01:20:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:05.651 01:20:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:20:05.651 01:20:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:11.000 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:20:11.000 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:20:11.000 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:11.000 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:11.000 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:11.000 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:11.000 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:11.000 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:20:11.000 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:11.000 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:20:11.000 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:20:11.000 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:20:11.000 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:20:11.000 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:20:11.000 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:20:11.000 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:11.000 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:11.000 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:11.000 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:11.000 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:11.000 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:11.000 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:11.000 01:20:33 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:11.000 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:11.000 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:11.000 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:11.000 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:11.000 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:11.000 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:11.000 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:11.000 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:11.000 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:11.000 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:11.000 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:11.000 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:11.000 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:11.000 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:11.000 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:11.000 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:11.000 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:11.000 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:11.000 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:11.000 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:11.000 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:11.000 01:20:33 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:11.000 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:11.000 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:11.000 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:11.000 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:11.000 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:11.000 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:11.000 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:11.000 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:11.000 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:11.000 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:11.000 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:11.000 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:11.000 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:11.000 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:11.001 Found net devices under 0000:86:00.0: cvl_0_0 00:20:11.001 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:11.001 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:11.001 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:11.001 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:11.001 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:11.001 01:20:33 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:20:11.001 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:11.001 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:11.001 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:11.001 Found net devices under 0000:86:00.1: cvl_0_1 00:20:11.001 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:11.001 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:11.001 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:20:11.001 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:11.001 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:11.001 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:11.001 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:11.001 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:11.001 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:11.001 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:11.001 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:11.001 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:11.001 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:11.001 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:11.001 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:11.001 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:11.001 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 
00:20:11.001 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:11.001 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:11.001 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:11.001 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:11.001 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:11.001 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:11.001 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:11.001 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:11.001 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:11.001 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:11.001 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:20:11.001 00:20:11.001 --- 10.0.0.2 ping statistics --- 00:20:11.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:11.001 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:20:11.001 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:11.001 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:11.001 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.308 ms 00:20:11.001 00:20:11.001 --- 10.0.0.1 ping statistics --- 00:20:11.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:11.001 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:20:11.001 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:11.001 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:20:11.001 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:11.001 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:11.001 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:11.001 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:11.001 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:11.001 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:11.001 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:11.001 01:20:33 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:11.001 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:11.001 01:20:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:11.001 01:20:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:11.001 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=924254 00:20:11.001 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 924254 00:20:11.001 01:20:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:11.001 01:20:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 924254 ']' 00:20:11.001 01:20:33 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:11.001 01:20:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:11.001 01:20:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:11.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:11.001 01:20:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:11.001 01:20:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:11.261 [2024-07-25 01:20:33.505027] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:20:11.261 [2024-07-25 01:20:33.505079] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:11.261 EAL: No free 2048 kB hugepages reported on node 1 00:20:11.261 [2024-07-25 01:20:33.564257] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.261 [2024-07-25 01:20:33.643351] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:11.261 [2024-07-25 01:20:33.643387] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:11.261 [2024-07-25 01:20:33.643394] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:11.261 [2024-07-25 01:20:33.643400] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:11.261 [2024-07-25 01:20:33.643405] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
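The `waitforlisten 924254` step above blocks until the freshly launched `nvmf_tgt` process is alive and its RPC UNIX-domain socket (`/var/tmp/spdk.sock`) accepts connections. A minimal sketch of that polling loop follows; the function name, timeout, and retry cadence are illustrative assumptions, not SPDK's exact shell implementation:

```python
import os
import socket
import time


def wait_for_listen(pid: int, rpc_addr: str, max_retries: int = 100) -> bool:
    """Poll until process `pid` is running and a UNIX-domain socket at
    `rpc_addr` accepts a connection, mirroring the idea behind
    autotest_common.sh's waitforlisten. Illustrative sketch only."""
    for _ in range(max_retries):
        try:
            os.kill(pid, 0)          # signal 0: raises if the process is gone
        except ProcessLookupError:
            return False             # target died before listening
        try:
            with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
                s.connect(rpc_addr)  # succeeds once the app is listening
                return True
        except (FileNotFoundError, ConnectionRefusedError):
            time.sleep(0.1)          # socket not up yet; retry
    return False
```

Once this returns, `rpc.py -s /var/tmp/spdk.sock …` calls (as seen throughout the rest of this run) can be issued safely.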
00:20:11.261 [2024-07-25 01:20:33.643427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:11.831 01:20:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:11.831 01:20:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:11.831 01:20:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:11.831 01:20:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:11.831 01:20:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:12.091 01:20:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:12.091 01:20:34 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:20:12.091 01:20:34 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:20:12.091 true 00:20:12.091 01:20:34 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:12.091 01:20:34 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:20:12.351 01:20:34 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:20:12.351 01:20:34 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:20:12.351 01:20:34 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:12.611 01:20:34 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:12.611 01:20:34 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:20:12.611 01:20:35 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:20:12.611 01:20:35 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:20:12.611 01:20:35 nvmf_tcp.nvmf_tls -- target/tls.sh@88 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:20:12.870 01:20:35 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:12.870 01:20:35 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:20:13.131 01:20:35 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:20:13.131 01:20:35 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:20:13.131 01:20:35 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:20:13.131 01:20:35 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:13.131 01:20:35 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:20:13.131 01:20:35 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:20:13.131 01:20:35 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:20:13.391 01:20:35 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:13.391 01:20:35 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:20:13.651 01:20:35 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:20:13.651 01:20:35 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:20:13.651 01:20:35 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:13.651 01:20:36 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:13.651 01:20:36 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:20:13.935 01:20:36 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # 
ktls=false 00:20:13.935 01:20:36 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:20:13.935 01:20:36 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:20:13.935 01:20:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:20:13.935 01:20:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:13.935 01:20:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:13.935 01:20:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:20:13.935 01:20:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:13.935 01:20:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:13.935 01:20:36 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:13.935 01:20:36 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:20:13.935 01:20:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:20:13.935 01:20:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:13.935 01:20:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:13.935 01:20:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:20:13.935 01:20:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:13.935 01:20:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:13.935 01:20:36 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:13.935 01:20:36 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:20:13.935 01:20:36 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.LkPMsUNZ92 00:20:13.935 01:20:36 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:20:13.935 
01:20:36 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.L5192VaEnJ 00:20:13.935 01:20:36 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:13.935 01:20:36 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:13.935 01:20:36 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.LkPMsUNZ92 00:20:13.935 01:20:36 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.L5192VaEnJ 00:20:13.935 01:20:36 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:14.195 01:20:36 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:20:14.455 01:20:36 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.LkPMsUNZ92 00:20:14.455 01:20:36 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.LkPMsUNZ92 00:20:14.455 01:20:36 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:14.455 [2024-07-25 01:20:36.929913] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:14.455 01:20:36 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:14.714 01:20:37 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:14.975 [2024-07-25 01:20:37.262764] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:14.975 [2024-07-25 01:20:37.262933] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:20:14.975 01:20:37 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:14.975 malloc0 00:20:14.975 01:20:37 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:15.235 01:20:37 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.LkPMsUNZ92 00:20:15.495 [2024-07-25 01:20:37.788161] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:15.495 01:20:37 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.LkPMsUNZ92 00:20:15.495 EAL: No free 2048 kB hugepages reported on node 1 00:20:25.487 Initializing NVMe Controllers 00:20:25.487 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:25.487 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:25.487 Initialization complete. Launching workers. 
00:20:25.487 ======================================================== 00:20:25.487 Latency(us) 00:20:25.487 Device Information : IOPS MiB/s Average min max 00:20:25.487 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16452.40 64.27 3890.43 855.07 7047.90 00:20:25.487 ======================================================== 00:20:25.487 Total : 16452.40 64.27 3890.43 855.07 7047.90 00:20:25.487 00:20:25.487 01:20:47 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.LkPMsUNZ92 00:20:25.487 01:20:47 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:25.487 01:20:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:25.487 01:20:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:25.487 01:20:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.LkPMsUNZ92' 00:20:25.487 01:20:47 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:25.487 01:20:47 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=926712 00:20:25.487 01:20:47 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:25.487 01:20:47 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 926712 /var/tmp/bdevperf.sock 00:20:25.487 01:20:47 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:25.487 01:20:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 926712 ']' 00:20:25.487 01:20:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:25.487 01:20:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:25.487 01:20:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:25.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:25.487 01:20:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:25.487 01:20:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:25.487 [2024-07-25 01:20:47.949922] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:20:25.487 [2024-07-25 01:20:47.949970] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid926712 ] 00:20:25.487 EAL: No free 2048 kB hugepages reported on node 1 00:20:25.748 [2024-07-25 01:20:47.999029] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.748 [2024-07-25 01:20:48.076894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:26.318 01:20:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:26.318 01:20:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:26.318 01:20:48 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.LkPMsUNZ92 00:20:26.579 [2024-07-25 01:20:48.934462] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:26.579 [2024-07-25 01:20:48.934532] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:26.579 TLSTESTn1 00:20:26.579 01:20:49 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:26.839 Running I/O for 10 seconds... 00:20:36.887 00:20:36.887 Latency(us) 00:20:36.887 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:36.887 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:36.887 Verification LBA range: start 0x0 length 0x2000 00:20:36.887 TLSTESTn1 : 10.08 1323.04 5.17 0.00 0.00 96436.85 7294.44 179625.63 00:20:36.887 =================================================================================================================== 00:20:36.887 Total : 1323.04 5.17 0.00 0.00 96436.85 7294.44 179625.63 00:20:36.887 0 00:20:36.887 01:20:59 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:36.887 01:20:59 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 926712 00:20:36.887 01:20:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 926712 ']' 00:20:36.887 01:20:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 926712 00:20:36.887 01:20:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:36.887 01:20:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:36.887 01:20:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 926712 00:20:36.887 01:20:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:36.887 01:20:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:36.887 01:20:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 926712' 00:20:36.887 killing process with pid 926712 00:20:36.887 01:20:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 926712 00:20:36.887 Received shutdown signal, test time was about 10.000000 seconds 00:20:36.887 00:20:36.887 Latency(us) 
00:20:36.887 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:36.887 =================================================================================================================== 00:20:36.887 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:36.887 [2024-07-25 01:20:59.297211] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:36.887 01:20:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 926712 00:20:37.147 01:20:59 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.L5192VaEnJ 00:20:37.147 01:20:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:37.147 01:20:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.L5192VaEnJ 00:20:37.147 01:20:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:37.147 01:20:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:37.147 01:20:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:37.147 01:20:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:37.147 01:20:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.L5192VaEnJ 00:20:37.147 01:20:59 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:37.148 01:20:59 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:37.148 01:20:59 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:37.148 01:20:59 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.L5192VaEnJ' 00:20:37.148 01:20:59 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:37.148 01:20:59 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=928550 00:20:37.148 01:20:59 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:37.148 01:20:59 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:37.148 01:20:59 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 928550 /var/tmp/bdevperf.sock 00:20:37.148 01:20:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 928550 ']' 00:20:37.148 01:20:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:37.148 01:20:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:37.148 01:20:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:37.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:37.148 01:20:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:37.148 01:20:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:37.148 [2024-07-25 01:20:59.527561] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:20:37.148 [2024-07-25 01:20:59.527610] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid928550 ] 00:20:37.148 EAL: No free 2048 kB hugepages reported on node 1 00:20:37.148 [2024-07-25 01:20:59.577597] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.408 [2024-07-25 01:20:59.662998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:37.979 01:21:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:37.979 01:21:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:37.979 01:21:00 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.L5192VaEnJ 00:20:38.239 [2024-07-25 01:21:00.497776] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:38.239 [2024-07-25 01:21:00.497848] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:38.239 [2024-07-25 01:21:00.508558] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:38.239 [2024-07-25 01:21:00.509227] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa2d570 (107): Transport endpoint is not connected 00:20:38.239 [2024-07-25 01:21:00.510220] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa2d570 (9): Bad file descriptor 00:20:38.239 [2024-07-25 
01:21:00.511221] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.239 [2024-07-25 01:21:00.511231] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:38.239 [2024-07-25 01:21:00.511242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:38.239 request: 00:20:38.239 { 00:20:38.239 "name": "TLSTEST", 00:20:38.239 "trtype": "tcp", 00:20:38.239 "traddr": "10.0.0.2", 00:20:38.239 "adrfam": "ipv4", 00:20:38.239 "trsvcid": "4420", 00:20:38.239 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:38.239 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:38.239 "prchk_reftag": false, 00:20:38.239 "prchk_guard": false, 00:20:38.239 "hdgst": false, 00:20:38.239 "ddgst": false, 00:20:38.239 "psk": "/tmp/tmp.L5192VaEnJ", 00:20:38.239 "method": "bdev_nvme_attach_controller", 00:20:38.239 "req_id": 1 00:20:38.239 } 00:20:38.239 Got JSON-RPC error response 00:20:38.239 response: 00:20:38.239 { 00:20:38.239 "code": -5, 00:20:38.239 "message": "Input/output error" 00:20:38.239 } 00:20:38.239 01:21:00 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 928550 00:20:38.239 01:21:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 928550 ']' 00:20:38.239 01:21:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 928550 00:20:38.239 01:21:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:38.239 01:21:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:38.239 01:21:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 928550 00:20:38.239 01:21:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:38.239 01:21:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:38.240 01:21:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 928550' 
00:20:38.240 killing process with pid 928550 00:20:38.240 01:21:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 928550 00:20:38.240 Received shutdown signal, test time was about 10.000000 seconds 00:20:38.240 00:20:38.240 Latency(us) 00:20:38.240 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:38.240 =================================================================================================================== 00:20:38.240 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:38.240 [2024-07-25 01:21:00.573887] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:38.240 01:21:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 928550 00:20:38.500 01:21:00 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:38.500 01:21:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:38.500 01:21:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:38.500 01:21:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:38.500 01:21:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:38.500 01:21:00 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.LkPMsUNZ92 00:20:38.500 01:21:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:38.500 01:21:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.LkPMsUNZ92 00:20:38.500 01:21:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:38.500 01:21:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:38.500 01:21:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:38.500 01:21:00 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:38.500 01:21:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.LkPMsUNZ92 00:20:38.500 01:21:00 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:38.500 01:21:00 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:38.500 01:21:00 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:20:38.500 01:21:00 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.LkPMsUNZ92' 00:20:38.500 01:21:00 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:38.500 01:21:00 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=928839 00:20:38.500 01:21:00 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:38.500 01:21:00 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:38.500 01:21:00 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 928839 /var/tmp/bdevperf.sock 00:20:38.500 01:21:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 928839 ']' 00:20:38.500 01:21:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:38.500 01:21:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:38.500 01:21:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:38.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:38.500 01:21:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:38.500 01:21:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:38.500 [2024-07-25 01:21:00.793578] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:20:38.500 [2024-07-25 01:21:00.793623] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid928839 ] 00:20:38.500 EAL: No free 2048 kB hugepages reported on node 1 00:20:38.500 [2024-07-25 01:21:00.843028] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:38.500 [2024-07-25 01:21:00.921797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:39.441 01:21:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:39.441 01:21:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:39.441 01:21:01 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.LkPMsUNZ92 00:20:39.441 [2024-07-25 01:21:01.755346] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:39.441 [2024-07-25 01:21:01.755410] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:39.441 [2024-07-25 01:21:01.761632] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:39.441 [2024-07-25 01:21:01.761655] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for 
identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:39.441 [2024-07-25 01:21:01.761680] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:39.441 [2024-07-25 01:21:01.762863] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19a0570 (107): Transport endpoint is not connected 00:20:39.441 [2024-07-25 01:21:01.763856] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19a0570 (9): Bad file descriptor 00:20:39.441 [2024-07-25 01:21:01.764861] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:39.441 [2024-07-25 01:21:01.764871] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:39.441 [2024-07-25 01:21:01.764883] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:39.441 request:
00:20:39.441 {
00:20:39.441 "name": "TLSTEST",
00:20:39.441 "trtype": "tcp",
00:20:39.441 "traddr": "10.0.0.2",
00:20:39.441 "adrfam": "ipv4",
00:20:39.441 "trsvcid": "4420",
00:20:39.441 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:20:39.441 "hostnqn": "nqn.2016-06.io.spdk:host2",
00:20:39.441 "prchk_reftag": false,
00:20:39.441 "prchk_guard": false,
00:20:39.441 "hdgst": false,
00:20:39.441 "ddgst": false,
00:20:39.441 "psk": "/tmp/tmp.LkPMsUNZ92",
00:20:39.441 "method": "bdev_nvme_attach_controller",
00:20:39.441 "req_id": 1
00:20:39.441 }
00:20:39.441 Got JSON-RPC error response
00:20:39.441 response:
00:20:39.441 {
00:20:39.441 "code": -5,
00:20:39.441 "message": "Input/output error"
00:20:39.441 }
00:20:39.441 01:21:01 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 928839
00:20:39.441 01:21:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 928839 ']'
00:20:39.441 01:21:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 928839
00:20:39.441 01:21:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname
00:20:39.441 01:21:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:20:39.441 01:21:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 928839
00:20:39.441 01:21:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:20:39.441 01:21:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:20:39.441 01:21:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 928839'
00:20:39.441 killing process with pid 928839
00:20:39.441 01:21:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 928839
00:20:39.441 Received shutdown signal, test time was about 10.000000 seconds
00:20:39.441
00:20:39.441 Latency(us)
00:20:39.441 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:39.441
=================================================================================================================== 00:20:39.441 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:39.441 [2024-07-25 01:21:01.827975] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:39.441 01:21:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 928839 00:20:39.702 01:21:01 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:39.702 01:21:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:39.702 01:21:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:39.702 01:21:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:39.702 01:21:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:39.702 01:21:01 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.LkPMsUNZ92 00:20:39.702 01:21:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:39.702 01:21:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.LkPMsUNZ92 00:20:39.702 01:21:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:39.702 01:21:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:39.702 01:21:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:39.702 01:21:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:39.702 01:21:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.LkPMsUNZ92 00:20:39.702 01:21:02 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
00:20:39.702 01:21:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:39.702 01:21:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:39.702 01:21:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.LkPMsUNZ92' 00:20:39.702 01:21:02 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:39.702 01:21:02 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=929159 00:20:39.702 01:21:02 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:39.702 01:21:02 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:39.702 01:21:02 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 929159 /var/tmp/bdevperf.sock 00:20:39.702 01:21:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 929159 ']' 00:20:39.702 01:21:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:39.702 01:21:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:39.702 01:21:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:39.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:39.702 01:21:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:39.702 01:21:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:39.702 [2024-07-25 01:21:02.054456] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:20:39.702 [2024-07-25 01:21:02.054507] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid929159 ] 00:20:39.702 EAL: No free 2048 kB hugepages reported on node 1 00:20:39.702 [2024-07-25 01:21:02.105879] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:39.702 [2024-07-25 01:21:02.174437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:40.642 01:21:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:40.642 01:21:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:40.642 01:21:02 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.LkPMsUNZ92 00:20:40.642 [2024-07-25 01:21:03.016928] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:40.642 [2024-07-25 01:21:03.016998] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:40.642 [2024-07-25 01:21:03.024857] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:40.642 [2024-07-25 01:21:03.024878] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:40.642 [2024-07-25 01:21:03.024901] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected
00:20:40.642 [2024-07-25 01:21:03.025447] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ed570 (107): Transport endpoint is not connected
00:20:40.642 [2024-07-25 01:21:03.026441] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ed570 (9): Bad file descriptor
00:20:40.642 [2024-07-25 01:21:03.027442] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:20:40.642 [2024-07-25 01:21:03.027452] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2
00:20:40.642 [2024-07-25 01:21:03.027461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:20:40.642 request:
00:20:40.642 {
00:20:40.642 "name": "TLSTEST",
00:20:40.642 "trtype": "tcp",
00:20:40.642 "traddr": "10.0.0.2",
00:20:40.642 "adrfam": "ipv4",
00:20:40.642 "trsvcid": "4420",
00:20:40.642 "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:20:40.642 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:20:40.642 "prchk_reftag": false,
00:20:40.642 "prchk_guard": false,
00:20:40.642 "hdgst": false,
00:20:40.642 "ddgst": false,
00:20:40.642 "psk": "/tmp/tmp.LkPMsUNZ92",
00:20:40.642 "method": "bdev_nvme_attach_controller",
00:20:40.642 "req_id": 1
00:20:40.642 }
00:20:40.642 Got JSON-RPC error response
00:20:40.642 response:
00:20:40.642 {
00:20:40.642 "code": -5,
00:20:40.642 "message": "Input/output error"
00:20:40.642 }
00:20:40.642 01:21:03 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 929159
00:20:40.642 01:21:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 929159 ']'
00:20:40.642 01:21:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 929159
00:20:40.642 01:21:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname
00:20:40.642 01:21:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:20:40.642 01:21:03 nvmf_tcp.nvmf_tls --
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 929159 00:20:40.642 01:21:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:40.642 01:21:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:40.642 01:21:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 929159' 00:20:40.642 killing process with pid 929159 00:20:40.642 01:21:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 929159 00:20:40.642 Received shutdown signal, test time was about 10.000000 seconds 00:20:40.642 00:20:40.642 Latency(us) 00:20:40.642 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:40.642 =================================================================================================================== 00:20:40.642 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:40.642 [2024-07-25 01:21:03.093116] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:40.642 01:21:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 929159 00:20:40.902 01:21:03 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:40.902 01:21:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:40.902 01:21:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:40.902 01:21:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:40.902 01:21:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:40.902 01:21:03 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:40.902 01:21:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:40.902 01:21:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 '' 00:20:40.902 01:21:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:40.902 01:21:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:40.902 01:21:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:40.902 01:21:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:40.902 01:21:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:40.902 01:21:03 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:40.902 01:21:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:40.902 01:21:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:40.902 01:21:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:20:40.902 01:21:03 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:40.902 01:21:03 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=929404 00:20:40.902 01:21:03 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:40.902 01:21:03 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:40.902 01:21:03 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 929404 /var/tmp/bdevperf.sock 00:20:40.902 01:21:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 929404 ']' 00:20:40.902 01:21:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:40.902 01:21:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:40.902 01:21:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:40.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:40.902 01:21:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:40.902 01:21:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:40.902 [2024-07-25 01:21:03.312933] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:20:40.902 [2024-07-25 01:21:03.312984] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid929404 ] 00:20:40.902 EAL: No free 2048 kB hugepages reported on node 1 00:20:40.902 [2024-07-25 01:21:03.363746] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:41.161 [2024-07-25 01:21:03.432550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:41.729 01:21:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:41.729 01:21:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:41.729 01:21:04 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:41.989 [2024-07-25 01:21:04.278070] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:41.989 [2024-07-25 01:21:04.280221] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x662af0 (9): Bad file descriptor 00:20:41.989 [2024-07-25 01:21:04.281223] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:41.989 [2024-07-25 01:21:04.281234] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2
00:20:41.989 [2024-07-25 01:21:04.281242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:41.989 request:
00:20:41.989 {
00:20:41.989 "name": "TLSTEST",
00:20:41.989 "trtype": "tcp",
00:20:41.989 "traddr": "10.0.0.2",
00:20:41.989 "adrfam": "ipv4",
00:20:41.989 "trsvcid": "4420",
00:20:41.989 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:20:41.989 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:20:41.989 "prchk_reftag": false,
00:20:41.989 "prchk_guard": false,
00:20:41.989 "hdgst": false,
00:20:41.989 "ddgst": false,
00:20:41.989 "method": "bdev_nvme_attach_controller",
00:20:41.989 "req_id": 1
00:20:41.989 }
00:20:41.989 Got JSON-RPC error response
00:20:41.989 response:
00:20:41.989 {
00:20:41.989 "code": -5,
00:20:41.989 "message": "Input/output error"
00:20:41.989 }
00:20:41.989 01:21:04 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 929404
00:20:41.989 01:21:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 929404 ']'
00:20:41.989 01:21:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 929404
00:20:41.989 01:21:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname
00:20:41.989 01:21:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:20:41.989 01:21:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 929404
00:20:41.989 01:21:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:20:41.989 01:21:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:20:41.989 01:21:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 929404'
00:20:41.989 killing process with pid 929404
00:20:41.989 01:21:04 nvmf_tcp.nvmf_tls --
common/autotest_common.sh@967 -- # kill 929404 00:20:41.989 Received shutdown signal, test time was about 10.000000 seconds 00:20:41.989 00:20:41.989 Latency(us) 00:20:41.989 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:41.989 =================================================================================================================== 00:20:41.989 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:41.989 01:21:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 929404 00:20:42.249 01:21:04 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:42.249 01:21:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:42.249 01:21:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:42.249 01:21:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:42.249 01:21:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:42.249 01:21:04 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 924254 00:20:42.249 01:21:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 924254 ']' 00:20:42.249 01:21:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 924254 00:20:42.249 01:21:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:42.249 01:21:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:42.249 01:21:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 924254 00:20:42.249 01:21:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:42.249 01:21:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:42.249 01:21:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 924254' 00:20:42.249 killing process with pid 924254 00:20:42.249 01:21:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 924254 
00:20:42.249 [2024-07-25 01:21:04.561809] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:42.249 01:21:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 924254 00:20:42.509 01:21:04 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:20:42.509 01:21:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:20:42.509 01:21:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:42.509 01:21:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:42.509 01:21:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:42.510 01:21:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:20:42.510 01:21:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:42.510 01:21:04 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:42.510 01:21:04 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:20:42.510 01:21:04 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.g1HZ5lEdaB 00:20:42.510 01:21:04 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:42.510 01:21:04 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.g1HZ5lEdaB 00:20:42.510 01:21:04 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:20:42.510 01:21:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:42.510 01:21:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:42.510 01:21:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:42.510 01:21:04 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@481 -- # nvmfpid=929656 00:20:42.510 01:21:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:42.510 01:21:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 929656 00:20:42.510 01:21:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 929656 ']' 00:20:42.510 01:21:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:42.510 01:21:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:42.510 01:21:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:42.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:42.510 01:21:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:42.510 01:21:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:42.510 [2024-07-25 01:21:04.840611] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:20:42.510 [2024-07-25 01:21:04.840655] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:42.510 EAL: No free 2048 kB hugepages reported on node 1 00:20:42.510 [2024-07-25 01:21:04.894785] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:42.510 [2024-07-25 01:21:04.965353] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:42.510 [2024-07-25 01:21:04.965392] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:42.510 [2024-07-25 01:21:04.965399] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:42.510 [2024-07-25 01:21:04.965404] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:42.510 [2024-07-25 01:21:04.965409] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:42.510 [2024-07-25 01:21:04.965425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:43.449 01:21:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:43.449 01:21:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:43.449 01:21:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:43.449 01:21:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:43.449 01:21:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:43.449 01:21:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:43.449 01:21:05 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.g1HZ5lEdaB 00:20:43.449 01:21:05 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.g1HZ5lEdaB 00:20:43.449 01:21:05 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:43.449 [2024-07-25 01:21:05.832402] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:43.449 01:21:05 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:43.709 01:21:06 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 
00:20:43.709 [2024-07-25 01:21:06.177328] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:43.709 [2024-07-25 01:21:06.177508] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:43.709 01:21:06 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:43.968 malloc0 00:20:43.968 01:21:06 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:44.228 01:21:06 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.g1HZ5lEdaB 00:20:44.228 [2024-07-25 01:21:06.666766] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:44.228 01:21:06 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.g1HZ5lEdaB 00:20:44.228 01:21:06 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:44.229 01:21:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:44.229 01:21:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:44.229 01:21:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.g1HZ5lEdaB' 00:20:44.229 01:21:06 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:44.229 01:21:06 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:44.229 01:21:06 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=929918 00:20:44.229 01:21:06 
nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:44.229 01:21:06 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 929918 /var/tmp/bdevperf.sock 00:20:44.229 01:21:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 929918 ']' 00:20:44.229 01:21:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:44.229 01:21:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:44.229 01:21:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:44.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:44.229 01:21:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:44.229 01:21:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:44.229 [2024-07-25 01:21:06.708892] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:20:44.229 [2024-07-25 01:21:06.708935] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid929918 ] 00:20:44.489 EAL: No free 2048 kB hugepages reported on node 1 00:20:44.489 [2024-07-25 01:21:06.758556] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:44.489 [2024-07-25 01:21:06.837220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:44.489 01:21:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:44.489 01:21:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:44.489 01:21:06 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.g1HZ5lEdaB 00:20:44.748 [2024-07-25 01:21:07.081271] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:44.748 [2024-07-25 01:21:07.081340] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:44.748 TLSTESTn1 00:20:44.748 01:21:07 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:45.008 Running I/O for 10 seconds... 
00:20:54.995
00:20:54.995 Latency(us)
00:20:54.995 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:54.995 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:20:54.995 Verification LBA range: start 0x0 length 0x2000
00:20:54.995 TLSTESTn1 : 10.08 1317.31 5.15 0.00 0.00 96855.78 6952.51 150447.86
00:20:54.995 ===================================================================================================================
00:20:54.995 Total : 1317.31 5.15 0.00 0.00 96855.78 6952.51 150447.86
00:20:54.995 0
00:20:54.995 01:21:17 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:20:54.995 01:21:17 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 929918
00:20:54.995 01:21:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 929918 ']'
00:20:54.995 01:21:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 929918
00:20:54.995 01:21:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname
00:20:54.995 01:21:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:20:54.995 01:21:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 929918
00:20:54.995 01:21:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:20:54.995 01:21:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:20:54.995 01:21:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 929918'
00:20:54.995 killing process with pid 929918
00:20:54.995 01:21:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 929918
00:20:54.995 Received shutdown signal, test time was about 10.000000 seconds
00:20:54.995
00:20:54.995 Latency(us)
00:20:54.995 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:54.995
=================================================================================================================== 00:20:54.995 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:54.995 [2024-07-25 01:21:17.451661] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:54.995 01:21:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 929918 00:20:55.255 01:21:17 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.g1HZ5lEdaB 00:20:55.255 01:21:17 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.g1HZ5lEdaB 00:20:55.255 01:21:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:55.255 01:21:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.g1HZ5lEdaB 00:20:55.255 01:21:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:55.255 01:21:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:55.255 01:21:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:55.255 01:21:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:55.255 01:21:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.g1HZ5lEdaB 00:20:55.255 01:21:17 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:55.255 01:21:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:55.255 01:21:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:55.255 01:21:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.g1HZ5lEdaB' 00:20:55.255 01:21:17 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:55.255 01:21:17 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=932138 00:20:55.255 01:21:17 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:55.255 01:21:17 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:55.255 01:21:17 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 932138 /var/tmp/bdevperf.sock 00:20:55.255 01:21:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 932138 ']' 00:20:55.255 01:21:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:55.255 01:21:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:55.255 01:21:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:55.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:55.255 01:21:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:55.255 01:21:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:55.255 [2024-07-25 01:21:17.683337] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:20:55.255 [2024-07-25 01:21:17.683387] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid932138 ] 00:20:55.255 EAL: No free 2048 kB hugepages reported on node 1 00:20:55.255 [2024-07-25 01:21:17.732496] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:55.516 [2024-07-25 01:21:17.812185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:56.086 01:21:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:56.086 01:21:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:56.086 01:21:18 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.g1HZ5lEdaB 00:20:56.347 [2024-07-25 01:21:18.649871] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:56.347 [2024-07-25 01:21:18.649916] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:56.347 [2024-07-25 01:21:18.649923] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.g1HZ5lEdaB 00:20:56.347 request: 00:20:56.347 { 00:20:56.347 "name": "TLSTEST", 00:20:56.347 "trtype": "tcp", 00:20:56.347 "traddr": "10.0.0.2", 00:20:56.347 "adrfam": "ipv4", 00:20:56.347 "trsvcid": "4420", 00:20:56.347 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:56.347 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:56.347 "prchk_reftag": false, 00:20:56.347 "prchk_guard": false, 00:20:56.347 "hdgst": false, 00:20:56.347 "ddgst": false, 00:20:56.347 "psk": "/tmp/tmp.g1HZ5lEdaB", 00:20:56.347 "method": "bdev_nvme_attach_controller", 
00:20:56.347 "req_id": 1 00:20:56.347 } 00:20:56.347 Got JSON-RPC error response 00:20:56.347 response: 00:20:56.347 { 00:20:56.347 "code": -1, 00:20:56.347 "message": "Operation not permitted" 00:20:56.347 } 00:20:56.347 01:21:18 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 932138 00:20:56.347 01:21:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 932138 ']' 00:20:56.347 01:21:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 932138 00:20:56.347 01:21:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:56.347 01:21:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:56.347 01:21:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 932138 00:20:56.347 01:21:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:56.347 01:21:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:56.347 01:21:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 932138' 00:20:56.347 killing process with pid 932138 00:20:56.347 01:21:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 932138 00:20:56.347 Received shutdown signal, test time was about 10.000000 seconds 00:20:56.347 00:20:56.347 Latency(us) 00:20:56.347 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:56.347 =================================================================================================================== 00:20:56.347 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:56.347 01:21:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 932138 00:20:56.608 01:21:18 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:56.608 01:21:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:56.608 01:21:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:56.608 01:21:18 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:56.608 01:21:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:56.608 01:21:18 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 929656 00:20:56.608 01:21:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 929656 ']' 00:20:56.608 01:21:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 929656 00:20:56.608 01:21:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:56.608 01:21:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:56.608 01:21:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 929656 00:20:56.608 01:21:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:56.608 01:21:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:56.608 01:21:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 929656' 00:20:56.608 killing process with pid 929656 00:20:56.608 01:21:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 929656 00:20:56.608 [2024-07-25 01:21:18.929992] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:56.608 01:21:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 929656 00:20:56.868 01:21:19 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:20:56.868 01:21:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:56.868 01:21:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:56.868 01:21:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:56.868 01:21:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=932381 00:20:56.868 01:21:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:56.868 01:21:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 932381 00:20:56.868 01:21:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 932381 ']' 00:20:56.868 01:21:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:56.868 01:21:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:56.868 01:21:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:56.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:56.868 01:21:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:56.868 01:21:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:56.868 [2024-07-25 01:21:19.173629] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:20:56.868 [2024-07-25 01:21:19.173678] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:56.868 EAL: No free 2048 kB hugepages reported on node 1 00:20:56.868 [2024-07-25 01:21:19.232434] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.868 [2024-07-25 01:21:19.309226] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:56.868 [2024-07-25 01:21:19.309262] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:56.868 [2024-07-25 01:21:19.309269] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:56.868 [2024-07-25 01:21:19.309275] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:56.868 [2024-07-25 01:21:19.309280] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:56.868 [2024-07-25 01:21:19.309298] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:57.807 01:21:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:57.807 01:21:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:57.807 01:21:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:57.807 01:21:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:57.807 01:21:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:57.807 01:21:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:57.807 01:21:20 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.g1HZ5lEdaB 00:20:57.807 01:21:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:57.807 01:21:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.g1HZ5lEdaB 00:20:57.807 01:21:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:20:57.807 01:21:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:57.807 01:21:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:20:57.807 01:21:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:57.807 01:21:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.g1HZ5lEdaB 00:20:57.807 01:21:20 nvmf_tcp.nvmf_tls -- 
target/tls.sh@49 -- # local key=/tmp/tmp.g1HZ5lEdaB 00:20:57.807 01:21:20 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:57.807 [2024-07-25 01:21:20.175756] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:57.807 01:21:20 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:58.069 01:21:20 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:58.069 [2024-07-25 01:21:20.516619] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:58.069 [2024-07-25 01:21:20.516783] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:58.069 01:21:20 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:58.395 malloc0 00:20:58.395 01:21:20 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:58.395 01:21:20 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.g1HZ5lEdaB 00:20:58.656 [2024-07-25 01:21:21.022088] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:58.656 [2024-07-25 01:21:21.022116] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:20:58.656 [2024-07-25 01:21:21.022139] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:20:58.656 
request: 00:20:58.656 { 00:20:58.656 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:58.656 "host": "nqn.2016-06.io.spdk:host1", 00:20:58.656 "psk": "/tmp/tmp.g1HZ5lEdaB", 00:20:58.656 "method": "nvmf_subsystem_add_host", 00:20:58.656 "req_id": 1 00:20:58.656 } 00:20:58.656 Got JSON-RPC error response 00:20:58.656 response: 00:20:58.656 { 00:20:58.656 "code": -32603, 00:20:58.656 "message": "Internal error" 00:20:58.656 } 00:20:58.656 01:21:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:58.656 01:21:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:58.656 01:21:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:58.656 01:21:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:58.656 01:21:21 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 932381 00:20:58.656 01:21:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 932381 ']' 00:20:58.656 01:21:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 932381 00:20:58.656 01:21:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:58.656 01:21:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:58.656 01:21:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 932381 00:20:58.656 01:21:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:58.656 01:21:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:58.656 01:21:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 932381' 00:20:58.656 killing process with pid 932381 00:20:58.656 01:21:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 932381 00:20:58.656 01:21:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 932381 00:20:58.916 01:21:21 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.g1HZ5lEdaB 
00:20:58.916 01:21:21 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:20:58.916 01:21:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:58.916 01:21:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:58.916 01:21:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:58.916 01:21:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=932788 00:20:58.916 01:21:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 932788 00:20:58.916 01:21:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:58.916 01:21:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 932788 ']' 00:20:58.916 01:21:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:58.916 01:21:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:58.916 01:21:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:58.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:58.916 01:21:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:58.916 01:21:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:58.916 [2024-07-25 01:21:21.336830] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:20:58.916 [2024-07-25 01:21:21.336879] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:58.916 EAL: No free 2048 kB hugepages reported on node 1 00:20:58.916 [2024-07-25 01:21:21.395594] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:59.176 [2024-07-25 01:21:21.469292] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:59.176 [2024-07-25 01:21:21.469333] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:59.176 [2024-07-25 01:21:21.469340] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:59.176 [2024-07-25 01:21:21.469346] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:59.176 [2024-07-25 01:21:21.469351] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:59.176 [2024-07-25 01:21:21.469369] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:59.746 01:21:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:59.746 01:21:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:59.746 01:21:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:59.746 01:21:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:59.746 01:21:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:59.746 01:21:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:59.746 01:21:22 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.g1HZ5lEdaB 00:20:59.746 01:21:22 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.g1HZ5lEdaB 00:20:59.746 01:21:22 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:00.006 [2024-07-25 01:21:22.319061] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:00.006 01:21:22 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:00.266 01:21:22 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:00.266 [2024-07-25 01:21:22.651937] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:00.266 [2024-07-25 01:21:22.652112] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:00.266 01:21:22 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 
4096 -b malloc0 00:21:00.526 malloc0 00:21:00.526 01:21:22 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:00.786 01:21:23 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.g1HZ5lEdaB 00:21:00.786 [2024-07-25 01:21:23.177404] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:00.786 01:21:23 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:00.786 01:21:23 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=933132 00:21:00.786 01:21:23 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:00.786 01:21:23 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 933132 /var/tmp/bdevperf.sock 00:21:00.786 01:21:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 933132 ']' 00:21:00.786 01:21:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:00.786 01:21:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:00.786 01:21:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:00.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:00.786 01:21:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:00.786 01:21:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:00.786 [2024-07-25 01:21:23.223227] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:21:00.786 [2024-07-25 01:21:23.223275] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid933132 ] 00:21:00.786 EAL: No free 2048 kB hugepages reported on node 1 00:21:00.786 [2024-07-25 01:21:23.274575] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.048 [2024-07-25 01:21:23.347708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:01.048 01:21:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:01.048 01:21:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:01.048 01:21:23 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.g1HZ5lEdaB 00:21:01.307 [2024-07-25 01:21:23.587680] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:01.307 [2024-07-25 01:21:23.587755] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:01.307 TLSTESTn1 00:21:01.307 01:21:23 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:21:01.568 01:21:23 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:21:01.568 "subsystems": [ 00:21:01.568 { 00:21:01.568 
"subsystem": "keyring", 00:21:01.568 "config": [] 00:21:01.568 }, 00:21:01.568 { 00:21:01.568 "subsystem": "iobuf", 00:21:01.568 "config": [ 00:21:01.568 { 00:21:01.568 "method": "iobuf_set_options", 00:21:01.568 "params": { 00:21:01.568 "small_pool_count": 8192, 00:21:01.568 "large_pool_count": 1024, 00:21:01.568 "small_bufsize": 8192, 00:21:01.568 "large_bufsize": 135168 00:21:01.568 } 00:21:01.568 } 00:21:01.568 ] 00:21:01.568 }, 00:21:01.568 { 00:21:01.568 "subsystem": "sock", 00:21:01.568 "config": [ 00:21:01.568 { 00:21:01.568 "method": "sock_set_default_impl", 00:21:01.568 "params": { 00:21:01.568 "impl_name": "posix" 00:21:01.568 } 00:21:01.568 }, 00:21:01.568 { 00:21:01.568 "method": "sock_impl_set_options", 00:21:01.568 "params": { 00:21:01.568 "impl_name": "ssl", 00:21:01.568 "recv_buf_size": 4096, 00:21:01.568 "send_buf_size": 4096, 00:21:01.568 "enable_recv_pipe": true, 00:21:01.568 "enable_quickack": false, 00:21:01.568 "enable_placement_id": 0, 00:21:01.568 "enable_zerocopy_send_server": true, 00:21:01.568 "enable_zerocopy_send_client": false, 00:21:01.568 "zerocopy_threshold": 0, 00:21:01.568 "tls_version": 0, 00:21:01.568 "enable_ktls": false 00:21:01.568 } 00:21:01.568 }, 00:21:01.568 { 00:21:01.568 "method": "sock_impl_set_options", 00:21:01.568 "params": { 00:21:01.568 "impl_name": "posix", 00:21:01.568 "recv_buf_size": 2097152, 00:21:01.568 "send_buf_size": 2097152, 00:21:01.568 "enable_recv_pipe": true, 00:21:01.568 "enable_quickack": false, 00:21:01.568 "enable_placement_id": 0, 00:21:01.568 "enable_zerocopy_send_server": true, 00:21:01.568 "enable_zerocopy_send_client": false, 00:21:01.568 "zerocopy_threshold": 0, 00:21:01.568 "tls_version": 0, 00:21:01.568 "enable_ktls": false 00:21:01.568 } 00:21:01.568 } 00:21:01.568 ] 00:21:01.568 }, 00:21:01.568 { 00:21:01.568 "subsystem": "vmd", 00:21:01.568 "config": [] 00:21:01.568 }, 00:21:01.568 { 00:21:01.568 "subsystem": "accel", 00:21:01.568 "config": [ 00:21:01.568 { 00:21:01.568 "method": 
"accel_set_options", 00:21:01.568 "params": { 00:21:01.568 "small_cache_size": 128, 00:21:01.568 "large_cache_size": 16, 00:21:01.568 "task_count": 2048, 00:21:01.568 "sequence_count": 2048, 00:21:01.568 "buf_count": 2048 00:21:01.568 } 00:21:01.568 } 00:21:01.568 ] 00:21:01.568 }, 00:21:01.568 { 00:21:01.568 "subsystem": "bdev", 00:21:01.568 "config": [ 00:21:01.568 { 00:21:01.568 "method": "bdev_set_options", 00:21:01.568 "params": { 00:21:01.568 "bdev_io_pool_size": 65535, 00:21:01.568 "bdev_io_cache_size": 256, 00:21:01.568 "bdev_auto_examine": true, 00:21:01.568 "iobuf_small_cache_size": 128, 00:21:01.568 "iobuf_large_cache_size": 16 00:21:01.568 } 00:21:01.568 }, 00:21:01.568 { 00:21:01.568 "method": "bdev_raid_set_options", 00:21:01.568 "params": { 00:21:01.568 "process_window_size_kb": 1024 00:21:01.568 } 00:21:01.568 }, 00:21:01.568 { 00:21:01.568 "method": "bdev_iscsi_set_options", 00:21:01.568 "params": { 00:21:01.568 "timeout_sec": 30 00:21:01.568 } 00:21:01.568 }, 00:21:01.568 { 00:21:01.568 "method": "bdev_nvme_set_options", 00:21:01.568 "params": { 00:21:01.568 "action_on_timeout": "none", 00:21:01.568 "timeout_us": 0, 00:21:01.568 "timeout_admin_us": 0, 00:21:01.568 "keep_alive_timeout_ms": 10000, 00:21:01.568 "arbitration_burst": 0, 00:21:01.568 "low_priority_weight": 0, 00:21:01.568 "medium_priority_weight": 0, 00:21:01.568 "high_priority_weight": 0, 00:21:01.568 "nvme_adminq_poll_period_us": 10000, 00:21:01.568 "nvme_ioq_poll_period_us": 0, 00:21:01.568 "io_queue_requests": 0, 00:21:01.568 "delay_cmd_submit": true, 00:21:01.568 "transport_retry_count": 4, 00:21:01.568 "bdev_retry_count": 3, 00:21:01.568 "transport_ack_timeout": 0, 00:21:01.568 "ctrlr_loss_timeout_sec": 0, 00:21:01.568 "reconnect_delay_sec": 0, 00:21:01.568 "fast_io_fail_timeout_sec": 0, 00:21:01.568 "disable_auto_failback": false, 00:21:01.568 "generate_uuids": false, 00:21:01.568 "transport_tos": 0, 00:21:01.568 "nvme_error_stat": false, 00:21:01.568 "rdma_srq_size": 0, 
00:21:01.568 "io_path_stat": false, 00:21:01.568 "allow_accel_sequence": false, 00:21:01.568 "rdma_max_cq_size": 0, 00:21:01.568 "rdma_cm_event_timeout_ms": 0, 00:21:01.568 "dhchap_digests": [ 00:21:01.568 "sha256", 00:21:01.568 "sha384", 00:21:01.568 "sha512" 00:21:01.568 ], 00:21:01.568 "dhchap_dhgroups": [ 00:21:01.568 "null", 00:21:01.568 "ffdhe2048", 00:21:01.568 "ffdhe3072", 00:21:01.568 "ffdhe4096", 00:21:01.568 "ffdhe6144", 00:21:01.568 "ffdhe8192" 00:21:01.568 ] 00:21:01.568 } 00:21:01.568 }, 00:21:01.568 { 00:21:01.568 "method": "bdev_nvme_set_hotplug", 00:21:01.568 "params": { 00:21:01.568 "period_us": 100000, 00:21:01.568 "enable": false 00:21:01.568 } 00:21:01.568 }, 00:21:01.568 { 00:21:01.568 "method": "bdev_malloc_create", 00:21:01.568 "params": { 00:21:01.568 "name": "malloc0", 00:21:01.568 "num_blocks": 8192, 00:21:01.568 "block_size": 4096, 00:21:01.568 "physical_block_size": 4096, 00:21:01.568 "uuid": "862400be-400b-4560-b66e-9dbd2bab9dbb", 00:21:01.568 "optimal_io_boundary": 0 00:21:01.568 } 00:21:01.568 }, 00:21:01.568 { 00:21:01.568 "method": "bdev_wait_for_examine" 00:21:01.568 } 00:21:01.568 ] 00:21:01.568 }, 00:21:01.568 { 00:21:01.568 "subsystem": "nbd", 00:21:01.568 "config": [] 00:21:01.568 }, 00:21:01.568 { 00:21:01.568 "subsystem": "scheduler", 00:21:01.568 "config": [ 00:21:01.568 { 00:21:01.568 "method": "framework_set_scheduler", 00:21:01.568 "params": { 00:21:01.568 "name": "static" 00:21:01.568 } 00:21:01.568 } 00:21:01.568 ] 00:21:01.568 }, 00:21:01.568 { 00:21:01.568 "subsystem": "nvmf", 00:21:01.568 "config": [ 00:21:01.568 { 00:21:01.568 "method": "nvmf_set_config", 00:21:01.568 "params": { 00:21:01.568 "discovery_filter": "match_any", 00:21:01.568 "admin_cmd_passthru": { 00:21:01.568 "identify_ctrlr": false 00:21:01.568 } 00:21:01.568 } 00:21:01.568 }, 00:21:01.568 { 00:21:01.568 "method": "nvmf_set_max_subsystems", 00:21:01.568 "params": { 00:21:01.568 "max_subsystems": 1024 00:21:01.568 } 00:21:01.568 }, 00:21:01.568 { 
00:21:01.569 "method": "nvmf_set_crdt", 00:21:01.569 "params": { 00:21:01.569 "crdt1": 0, 00:21:01.569 "crdt2": 0, 00:21:01.569 "crdt3": 0 00:21:01.569 } 00:21:01.569 }, 00:21:01.569 { 00:21:01.569 "method": "nvmf_create_transport", 00:21:01.569 "params": { 00:21:01.569 "trtype": "TCP", 00:21:01.569 "max_queue_depth": 128, 00:21:01.569 "max_io_qpairs_per_ctrlr": 127, 00:21:01.569 "in_capsule_data_size": 4096, 00:21:01.569 "max_io_size": 131072, 00:21:01.569 "io_unit_size": 131072, 00:21:01.569 "max_aq_depth": 128, 00:21:01.569 "num_shared_buffers": 511, 00:21:01.569 "buf_cache_size": 4294967295, 00:21:01.569 "dif_insert_or_strip": false, 00:21:01.569 "zcopy": false, 00:21:01.569 "c2h_success": false, 00:21:01.569 "sock_priority": 0, 00:21:01.569 "abort_timeout_sec": 1, 00:21:01.569 "ack_timeout": 0, 00:21:01.569 "data_wr_pool_size": 0 00:21:01.569 } 00:21:01.569 }, 00:21:01.569 { 00:21:01.569 "method": "nvmf_create_subsystem", 00:21:01.569 "params": { 00:21:01.569 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:01.569 "allow_any_host": false, 00:21:01.569 "serial_number": "SPDK00000000000001", 00:21:01.569 "model_number": "SPDK bdev Controller", 00:21:01.569 "max_namespaces": 10, 00:21:01.569 "min_cntlid": 1, 00:21:01.569 "max_cntlid": 65519, 00:21:01.569 "ana_reporting": false 00:21:01.569 } 00:21:01.569 }, 00:21:01.569 { 00:21:01.569 "method": "nvmf_subsystem_add_host", 00:21:01.569 "params": { 00:21:01.569 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:01.569 "host": "nqn.2016-06.io.spdk:host1", 00:21:01.569 "psk": "/tmp/tmp.g1HZ5lEdaB" 00:21:01.569 } 00:21:01.569 }, 00:21:01.569 { 00:21:01.569 "method": "nvmf_subsystem_add_ns", 00:21:01.569 "params": { 00:21:01.569 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:01.569 "namespace": { 00:21:01.569 "nsid": 1, 00:21:01.569 "bdev_name": "malloc0", 00:21:01.569 "nguid": "862400BE400B4560B66E9DBD2BAB9DBB", 00:21:01.569 "uuid": "862400be-400b-4560-b66e-9dbd2bab9dbb", 00:21:01.569 "no_auto_visible": false 00:21:01.569 } 00:21:01.569 
} 00:21:01.569 }, 00:21:01.569 { 00:21:01.569 "method": "nvmf_subsystem_add_listener", 00:21:01.569 "params": { 00:21:01.569 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:01.569 "listen_address": { 00:21:01.569 "trtype": "TCP", 00:21:01.569 "adrfam": "IPv4", 00:21:01.569 "traddr": "10.0.0.2", 00:21:01.569 "trsvcid": "4420" 00:21:01.569 }, 00:21:01.569 "secure_channel": true 00:21:01.569 } 00:21:01.569 } 00:21:01.569 ] 00:21:01.569 } 00:21:01.569 ] 00:21:01.569 }' 00:21:01.569 01:21:23 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:01.829 01:21:24 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:21:01.829 "subsystems": [ 00:21:01.829 { 00:21:01.829 "subsystem": "keyring", 00:21:01.829 "config": [] 00:21:01.830 }, 00:21:01.830 { 00:21:01.830 "subsystem": "iobuf", 00:21:01.830 "config": [ 00:21:01.830 { 00:21:01.830 "method": "iobuf_set_options", 00:21:01.830 "params": { 00:21:01.830 "small_pool_count": 8192, 00:21:01.830 "large_pool_count": 1024, 00:21:01.830 "small_bufsize": 8192, 00:21:01.830 "large_bufsize": 135168 00:21:01.830 } 00:21:01.830 } 00:21:01.830 ] 00:21:01.830 }, 00:21:01.830 { 00:21:01.830 "subsystem": "sock", 00:21:01.830 "config": [ 00:21:01.830 { 00:21:01.830 "method": "sock_set_default_impl", 00:21:01.830 "params": { 00:21:01.830 "impl_name": "posix" 00:21:01.830 } 00:21:01.830 }, 00:21:01.830 { 00:21:01.830 "method": "sock_impl_set_options", 00:21:01.830 "params": { 00:21:01.830 "impl_name": "ssl", 00:21:01.830 "recv_buf_size": 4096, 00:21:01.830 "send_buf_size": 4096, 00:21:01.830 "enable_recv_pipe": true, 00:21:01.830 "enable_quickack": false, 00:21:01.830 "enable_placement_id": 0, 00:21:01.830 "enable_zerocopy_send_server": true, 00:21:01.830 "enable_zerocopy_send_client": false, 00:21:01.830 "zerocopy_threshold": 0, 00:21:01.830 "tls_version": 0, 00:21:01.830 "enable_ktls": false 00:21:01.830 } 00:21:01.830 }, 00:21:01.830 { 
00:21:01.830 "method": "sock_impl_set_options", 00:21:01.830 "params": { 00:21:01.830 "impl_name": "posix", 00:21:01.830 "recv_buf_size": 2097152, 00:21:01.830 "send_buf_size": 2097152, 00:21:01.830 "enable_recv_pipe": true, 00:21:01.830 "enable_quickack": false, 00:21:01.830 "enable_placement_id": 0, 00:21:01.830 "enable_zerocopy_send_server": true, 00:21:01.830 "enable_zerocopy_send_client": false, 00:21:01.830 "zerocopy_threshold": 0, 00:21:01.830 "tls_version": 0, 00:21:01.830 "enable_ktls": false 00:21:01.830 } 00:21:01.830 } 00:21:01.830 ] 00:21:01.830 }, 00:21:01.830 { 00:21:01.830 "subsystem": "vmd", 00:21:01.830 "config": [] 00:21:01.830 }, 00:21:01.830 { 00:21:01.830 "subsystem": "accel", 00:21:01.830 "config": [ 00:21:01.830 { 00:21:01.830 "method": "accel_set_options", 00:21:01.830 "params": { 00:21:01.830 "small_cache_size": 128, 00:21:01.830 "large_cache_size": 16, 00:21:01.830 "task_count": 2048, 00:21:01.830 "sequence_count": 2048, 00:21:01.830 "buf_count": 2048 00:21:01.830 } 00:21:01.830 } 00:21:01.830 ] 00:21:01.830 }, 00:21:01.830 { 00:21:01.830 "subsystem": "bdev", 00:21:01.830 "config": [ 00:21:01.830 { 00:21:01.830 "method": "bdev_set_options", 00:21:01.830 "params": { 00:21:01.830 "bdev_io_pool_size": 65535, 00:21:01.830 "bdev_io_cache_size": 256, 00:21:01.830 "bdev_auto_examine": true, 00:21:01.830 "iobuf_small_cache_size": 128, 00:21:01.830 "iobuf_large_cache_size": 16 00:21:01.830 } 00:21:01.830 }, 00:21:01.830 { 00:21:01.830 "method": "bdev_raid_set_options", 00:21:01.830 "params": { 00:21:01.830 "process_window_size_kb": 1024 00:21:01.830 } 00:21:01.830 }, 00:21:01.830 { 00:21:01.830 "method": "bdev_iscsi_set_options", 00:21:01.830 "params": { 00:21:01.830 "timeout_sec": 30 00:21:01.830 } 00:21:01.830 }, 00:21:01.830 { 00:21:01.830 "method": "bdev_nvme_set_options", 00:21:01.830 "params": { 00:21:01.830 "action_on_timeout": "none", 00:21:01.830 "timeout_us": 0, 00:21:01.830 "timeout_admin_us": 0, 00:21:01.830 "keep_alive_timeout_ms": 
10000, 00:21:01.830 "arbitration_burst": 0, 00:21:01.830 "low_priority_weight": 0, 00:21:01.830 "medium_priority_weight": 0, 00:21:01.830 "high_priority_weight": 0, 00:21:01.830 "nvme_adminq_poll_period_us": 10000, 00:21:01.830 "nvme_ioq_poll_period_us": 0, 00:21:01.830 "io_queue_requests": 512, 00:21:01.830 "delay_cmd_submit": true, 00:21:01.830 "transport_retry_count": 4, 00:21:01.830 "bdev_retry_count": 3, 00:21:01.830 "transport_ack_timeout": 0, 00:21:01.830 "ctrlr_loss_timeout_sec": 0, 00:21:01.830 "reconnect_delay_sec": 0, 00:21:01.830 "fast_io_fail_timeout_sec": 0, 00:21:01.830 "disable_auto_failback": false, 00:21:01.830 "generate_uuids": false, 00:21:01.830 "transport_tos": 0, 00:21:01.830 "nvme_error_stat": false, 00:21:01.830 "rdma_srq_size": 0, 00:21:01.830 "io_path_stat": false, 00:21:01.830 "allow_accel_sequence": false, 00:21:01.830 "rdma_max_cq_size": 0, 00:21:01.830 "rdma_cm_event_timeout_ms": 0, 00:21:01.830 "dhchap_digests": [ 00:21:01.830 "sha256", 00:21:01.830 "sha384", 00:21:01.830 "sha512" 00:21:01.830 ], 00:21:01.830 "dhchap_dhgroups": [ 00:21:01.830 "null", 00:21:01.830 "ffdhe2048", 00:21:01.830 "ffdhe3072", 00:21:01.830 "ffdhe4096", 00:21:01.830 "ffdhe6144", 00:21:01.830 "ffdhe8192" 00:21:01.830 ] 00:21:01.830 } 00:21:01.830 }, 00:21:01.830 { 00:21:01.830 "method": "bdev_nvme_attach_controller", 00:21:01.830 "params": { 00:21:01.830 "name": "TLSTEST", 00:21:01.830 "trtype": "TCP", 00:21:01.830 "adrfam": "IPv4", 00:21:01.830 "traddr": "10.0.0.2", 00:21:01.830 "trsvcid": "4420", 00:21:01.830 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:01.830 "prchk_reftag": false, 00:21:01.830 "prchk_guard": false, 00:21:01.830 "ctrlr_loss_timeout_sec": 0, 00:21:01.830 "reconnect_delay_sec": 0, 00:21:01.830 "fast_io_fail_timeout_sec": 0, 00:21:01.830 "psk": "/tmp/tmp.g1HZ5lEdaB", 00:21:01.830 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:01.830 "hdgst": false, 00:21:01.830 "ddgst": false 00:21:01.830 } 00:21:01.830 }, 00:21:01.830 { 00:21:01.830 "method": 
"bdev_nvme_set_hotplug", 00:21:01.830 "params": { 00:21:01.830 "period_us": 100000, 00:21:01.830 "enable": false 00:21:01.830 } 00:21:01.830 }, 00:21:01.830 { 00:21:01.830 "method": "bdev_wait_for_examine" 00:21:01.830 } 00:21:01.830 ] 00:21:01.830 }, 00:21:01.830 { 00:21:01.830 "subsystem": "nbd", 00:21:01.830 "config": [] 00:21:01.830 } 00:21:01.830 ] 00:21:01.830 }' 00:21:01.830 01:21:24 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 933132 00:21:01.830 01:21:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 933132 ']' 00:21:01.830 01:21:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 933132 00:21:01.830 01:21:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:01.830 01:21:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:01.830 01:21:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 933132 00:21:01.830 01:21:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:01.830 01:21:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:01.830 01:21:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 933132' 00:21:01.830 killing process with pid 933132 00:21:01.830 01:21:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 933132 00:21:01.830 Received shutdown signal, test time was about 10.000000 seconds 00:21:01.830 00:21:01.830 Latency(us) 00:21:01.830 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:01.830 =================================================================================================================== 00:21:01.830 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:01.830 [2024-07-25 01:21:24.236845] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:01.830 
01:21:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 933132 00:21:02.091 01:21:24 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 932788 00:21:02.091 01:21:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 932788 ']' 00:21:02.091 01:21:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 932788 00:21:02.091 01:21:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:02.091 01:21:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:02.091 01:21:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 932788 00:21:02.091 01:21:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:02.091 01:21:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:02.091 01:21:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 932788' 00:21:02.091 killing process with pid 932788 00:21:02.091 01:21:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 932788 00:21:02.091 [2024-07-25 01:21:24.466019] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:02.091 01:21:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 932788 00:21:02.351 01:21:24 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:21:02.351 01:21:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:02.351 01:21:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:02.351 01:21:24 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:21:02.351 "subsystems": [ 00:21:02.351 { 00:21:02.351 "subsystem": "keyring", 00:21:02.351 "config": [] 00:21:02.351 }, 00:21:02.351 { 00:21:02.351 "subsystem": "iobuf", 00:21:02.351 "config": [ 00:21:02.351 { 00:21:02.351 "method": "iobuf_set_options", 00:21:02.351 
"params": { 00:21:02.351 "small_pool_count": 8192, 00:21:02.351 "large_pool_count": 1024, 00:21:02.351 "small_bufsize": 8192, 00:21:02.351 "large_bufsize": 135168 00:21:02.351 } 00:21:02.351 } 00:21:02.351 ] 00:21:02.351 }, 00:21:02.351 { 00:21:02.351 "subsystem": "sock", 00:21:02.351 "config": [ 00:21:02.351 { 00:21:02.351 "method": "sock_set_default_impl", 00:21:02.351 "params": { 00:21:02.351 "impl_name": "posix" 00:21:02.351 } 00:21:02.351 }, 00:21:02.351 { 00:21:02.351 "method": "sock_impl_set_options", 00:21:02.351 "params": { 00:21:02.351 "impl_name": "ssl", 00:21:02.351 "recv_buf_size": 4096, 00:21:02.351 "send_buf_size": 4096, 00:21:02.351 "enable_recv_pipe": true, 00:21:02.351 "enable_quickack": false, 00:21:02.351 "enable_placement_id": 0, 00:21:02.351 "enable_zerocopy_send_server": true, 00:21:02.351 "enable_zerocopy_send_client": false, 00:21:02.351 "zerocopy_threshold": 0, 00:21:02.351 "tls_version": 0, 00:21:02.351 "enable_ktls": false 00:21:02.351 } 00:21:02.351 }, 00:21:02.351 { 00:21:02.351 "method": "sock_impl_set_options", 00:21:02.351 "params": { 00:21:02.351 "impl_name": "posix", 00:21:02.351 "recv_buf_size": 2097152, 00:21:02.351 "send_buf_size": 2097152, 00:21:02.351 "enable_recv_pipe": true, 00:21:02.351 "enable_quickack": false, 00:21:02.351 "enable_placement_id": 0, 00:21:02.351 "enable_zerocopy_send_server": true, 00:21:02.351 "enable_zerocopy_send_client": false, 00:21:02.351 "zerocopy_threshold": 0, 00:21:02.351 "tls_version": 0, 00:21:02.351 "enable_ktls": false 00:21:02.351 } 00:21:02.351 } 00:21:02.351 ] 00:21:02.351 }, 00:21:02.351 { 00:21:02.351 "subsystem": "vmd", 00:21:02.351 "config": [] 00:21:02.351 }, 00:21:02.351 { 00:21:02.351 "subsystem": "accel", 00:21:02.351 "config": [ 00:21:02.351 { 00:21:02.351 "method": "accel_set_options", 00:21:02.351 "params": { 00:21:02.351 "small_cache_size": 128, 00:21:02.351 "large_cache_size": 16, 00:21:02.351 "task_count": 2048, 00:21:02.351 "sequence_count": 2048, 00:21:02.351 "buf_count": 
2048 00:21:02.351 } 00:21:02.351 } 00:21:02.351 ] 00:21:02.351 }, 00:21:02.351 { 00:21:02.351 "subsystem": "bdev", 00:21:02.351 "config": [ 00:21:02.351 { 00:21:02.351 "method": "bdev_set_options", 00:21:02.351 "params": { 00:21:02.351 "bdev_io_pool_size": 65535, 00:21:02.351 "bdev_io_cache_size": 256, 00:21:02.351 "bdev_auto_examine": true, 00:21:02.351 "iobuf_small_cache_size": 128, 00:21:02.351 "iobuf_large_cache_size": 16 00:21:02.351 } 00:21:02.351 }, 00:21:02.351 { 00:21:02.351 "method": "bdev_raid_set_options", 00:21:02.351 "params": { 00:21:02.351 "process_window_size_kb": 1024 00:21:02.351 } 00:21:02.351 }, 00:21:02.351 { 00:21:02.352 "method": "bdev_iscsi_set_options", 00:21:02.352 "params": { 00:21:02.352 "timeout_sec": 30 00:21:02.352 } 00:21:02.352 }, 00:21:02.352 { 00:21:02.352 "method": "bdev_nvme_set_options", 00:21:02.352 "params": { 00:21:02.352 "action_on_timeout": "none", 00:21:02.352 "timeout_us": 0, 00:21:02.352 "timeout_admin_us": 0, 00:21:02.352 "keep_alive_timeout_ms": 10000, 00:21:02.352 "arbitration_burst": 0, 00:21:02.352 "low_priority_weight": 0, 00:21:02.352 "medium_priority_weight": 0, 00:21:02.352 "high_priority_weight": 0, 00:21:02.352 "nvme_adminq_poll_period_us": 10000, 00:21:02.352 "nvme_ioq_poll_period_us": 0, 00:21:02.352 "io_queue_requests": 0, 00:21:02.352 "delay_cmd_submit": true, 00:21:02.352 "transport_retry_count": 4, 00:21:02.352 "bdev_retry_count": 3, 00:21:02.352 "transport_ack_timeout": 0, 00:21:02.352 "ctrlr_loss_timeout_sec": 0, 00:21:02.352 "reconnect_delay_sec": 0, 00:21:02.352 "fast_io_fail_timeout_sec": 0, 00:21:02.352 "disable_auto_failback": false, 00:21:02.352 "generate_uuids": false, 00:21:02.352 "transport_tos": 0, 00:21:02.352 "nvme_error_stat": false, 00:21:02.352 "rdma_srq_size": 0, 00:21:02.352 "io_path_stat": false, 00:21:02.352 "allow_accel_sequence": false, 00:21:02.352 "rdma_max_cq_size": 0, 00:21:02.352 "rdma_cm_event_timeout_ms": 0, 00:21:02.352 "dhchap_digests": [ 00:21:02.352 "sha256", 
00:21:02.352 "sha384", 00:21:02.352 "sha512" 00:21:02.352 ], 00:21:02.352 "dhchap_dhgroups": [ 00:21:02.352 "null", 00:21:02.352 "ffdhe2048", 00:21:02.352 "ffdhe3072", 00:21:02.352 "ffdhe4096", 00:21:02.352 "ffdhe6144", 00:21:02.352 "ffdhe8192" 00:21:02.352 ] 00:21:02.352 } 00:21:02.352 }, 00:21:02.352 { 00:21:02.352 "method": "bdev_nvme_set_hotplug", 00:21:02.352 "params": { 00:21:02.352 "period_us": 100000, 00:21:02.352 "enable": false 00:21:02.352 } 00:21:02.352 }, 00:21:02.352 { 00:21:02.352 "method": "bdev_malloc_create", 00:21:02.352 "params": { 00:21:02.352 "name": "malloc0", 00:21:02.352 "num_blocks": 8192, 00:21:02.352 "block_size": 4096, 00:21:02.352 "physical_block_size": 4096, 00:21:02.352 "uuid": "862400be-400b-4560-b66e-9dbd2bab9dbb", 00:21:02.352 "optimal_io_boundary": 0 00:21:02.352 } 00:21:02.352 }, 00:21:02.352 { 00:21:02.352 "method": "bdev_wait_for_examine" 00:21:02.352 } 00:21:02.352 ] 00:21:02.352 }, 00:21:02.352 { 00:21:02.352 "subsystem": "nbd", 00:21:02.352 "config": [] 00:21:02.352 }, 00:21:02.352 { 00:21:02.352 "subsystem": "scheduler", 00:21:02.352 "config": [ 00:21:02.352 { 00:21:02.352 "method": "framework_set_scheduler", 00:21:02.352 "params": { 00:21:02.352 "name": "static" 00:21:02.352 } 00:21:02.352 } 00:21:02.352 ] 00:21:02.352 }, 00:21:02.352 { 00:21:02.352 "subsystem": "nvmf", 00:21:02.352 "config": [ 00:21:02.352 { 00:21:02.352 "method": "nvmf_set_config", 00:21:02.352 "params": { 00:21:02.352 "discovery_filter": "match_any", 00:21:02.352 "admin_cmd_passthru": { 00:21:02.352 "identify_ctrlr": false 00:21:02.352 } 00:21:02.352 } 00:21:02.352 }, 00:21:02.352 { 00:21:02.352 "method": "nvmf_set_max_subsystems", 00:21:02.352 "params": { 00:21:02.352 "max_subsystems": 1024 00:21:02.352 } 00:21:02.352 }, 00:21:02.352 { 00:21:02.352 "method": "nvmf_set_crdt", 00:21:02.352 "params": { 00:21:02.352 "crdt1": 0, 00:21:02.352 "crdt2": 0, 00:21:02.352 "crdt3": 0 00:21:02.352 } 00:21:02.352 }, 00:21:02.352 { 00:21:02.352 "method": 
"nvmf_create_transport", 00:21:02.352 "params": { 00:21:02.352 "trtype": "TCP", 00:21:02.352 "max_queue_depth": 128, 00:21:02.352 "max_io_qpairs_per_ctrlr": 127, 00:21:02.352 "in_capsule_data_size": 4096, 00:21:02.352 "max_io_size": 131072, 00:21:02.352 "io_unit_size": 131072, 00:21:02.352 "max_aq_depth": 128, 00:21:02.352 "num_shared_buffers": 511, 00:21:02.352 "buf_cache_size": 4294967295, 00:21:02.352 "dif_insert_or_strip": false, 00:21:02.352 "zcopy": false, 00:21:02.352 "c2h_success": false, 00:21:02.352 "sock_priority": 0, 00:21:02.352 "abort_timeout_sec": 1, 00:21:02.352 "ack_timeout": 0, 00:21:02.352 "data_wr_pool_size": 0 00:21:02.352 } 00:21:02.352 }, 00:21:02.352 { 00:21:02.352 "method": "nvmf_create_subsystem", 00:21:02.352 "params": { 00:21:02.352 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:02.352 "allow_any_host": false, 00:21:02.352 "serial_number": "SPDK00000000000001", 00:21:02.352 "model_number": "SPDK bdev Controller", 00:21:02.352 "max_namespaces": 10, 00:21:02.352 "min_cntlid": 1, 00:21:02.352 "max_cntlid": 65519, 00:21:02.352 "ana_reporting": false 00:21:02.352 } 00:21:02.352 }, 00:21:02.352 { 00:21:02.352 "method": "nvmf_subsystem_add_host", 00:21:02.352 "params": { 00:21:02.352 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:02.352 "host": "nqn.2016-06.io.spdk:host1", 00:21:02.352 "psk": "/tmp/tmp.g1HZ5lEdaB" 00:21:02.352 } 00:21:02.352 }, 00:21:02.352 { 00:21:02.352 "method": "nvmf_subsystem_add_ns", 00:21:02.352 "params": { 00:21:02.352 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:02.352 "namespace": { 00:21:02.352 "nsid": 1, 00:21:02.352 "bdev_name": "malloc0", 00:21:02.352 "nguid": "862400BE400B4560B66E9DBD2BAB9DBB", 00:21:02.352 "uuid": "862400be-400b-4560-b66e-9dbd2bab9dbb", 00:21:02.352 "no_auto_visible": false 00:21:02.352 } 00:21:02.352 } 00:21:02.352 }, 00:21:02.352 { 00:21:02.352 "method": "nvmf_subsystem_add_listener", 00:21:02.352 "params": { 00:21:02.352 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:02.352 "listen_address": { 00:21:02.352 
"trtype": "TCP", 00:21:02.352 "adrfam": "IPv4", 00:21:02.352 "traddr": "10.0.0.2", 00:21:02.352 "trsvcid": "4420" 00:21:02.352 }, 00:21:02.352 "secure_channel": true 00:21:02.352 } 00:21:02.352 } 00:21:02.352 ] 00:21:02.352 } 00:21:02.352 ] 00:21:02.352 }' 00:21:02.352 01:21:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:02.352 01:21:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=933376 00:21:02.352 01:21:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 933376 00:21:02.352 01:21:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:21:02.352 01:21:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 933376 ']' 00:21:02.352 01:21:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:02.352 01:21:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:02.352 01:21:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:02.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:02.352 01:21:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:02.352 01:21:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:02.352 [2024-07-25 01:21:24.714853] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:21:02.352 [2024-07-25 01:21:24.714898] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:02.352 EAL: No free 2048 kB hugepages reported on node 1 00:21:02.352 [2024-07-25 01:21:24.773317] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:02.612 [2024-07-25 01:21:24.850776] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:02.612 [2024-07-25 01:21:24.850818] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:02.612 [2024-07-25 01:21:24.850826] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:02.612 [2024-07-25 01:21:24.850831] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:02.612 [2024-07-25 01:21:24.850837] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:02.612 [2024-07-25 01:21:24.850900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:02.612 [2024-07-25 01:21:25.054283] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:02.612 [2024-07-25 01:21:25.085557] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:02.612 [2024-07-25 01:21:25.101608] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:02.612 [2024-07-25 01:21:25.101778] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:03.181 01:21:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:03.181 01:21:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:03.181 01:21:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:03.181 01:21:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:03.181 01:21:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:03.181 01:21:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:03.181 01:21:25 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=933583 00:21:03.181 01:21:25 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 933583 /var/tmp/bdevperf.sock 00:21:03.181 01:21:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 933583 ']' 00:21:03.181 01:21:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:03.181 01:21:25 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:21:03.181 01:21:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:03.181 01:21:25 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:03.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:03.181 01:21:25 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:21:03.181 "subsystems": [ 00:21:03.181 { 00:21:03.181 "subsystem": "keyring", 00:21:03.181 "config": [] 00:21:03.181 }, 00:21:03.181 { 00:21:03.181 "subsystem": "iobuf", 00:21:03.181 "config": [ 00:21:03.181 { 00:21:03.181 "method": "iobuf_set_options", 00:21:03.181 "params": { 00:21:03.181 "small_pool_count": 8192, 00:21:03.181 "large_pool_count": 1024, 00:21:03.181 "small_bufsize": 8192, 00:21:03.181 "large_bufsize": 135168 00:21:03.181 } 00:21:03.181 } 00:21:03.181 ] 00:21:03.181 }, 00:21:03.181 { 00:21:03.181 "subsystem": "sock", 00:21:03.181 "config": [ 00:21:03.181 { 00:21:03.181 "method": "sock_set_default_impl", 00:21:03.181 "params": { 00:21:03.181 "impl_name": "posix" 00:21:03.181 } 00:21:03.181 }, 00:21:03.181 { 00:21:03.181 "method": "sock_impl_set_options", 00:21:03.181 "params": { 00:21:03.181 "impl_name": "ssl", 00:21:03.181 "recv_buf_size": 4096, 00:21:03.181 "send_buf_size": 4096, 00:21:03.181 "enable_recv_pipe": true, 00:21:03.181 "enable_quickack": false, 00:21:03.181 "enable_placement_id": 0, 00:21:03.181 "enable_zerocopy_send_server": true, 00:21:03.181 "enable_zerocopy_send_client": false, 00:21:03.181 "zerocopy_threshold": 0, 00:21:03.181 "tls_version": 0, 00:21:03.181 "enable_ktls": false 00:21:03.181 } 00:21:03.181 }, 00:21:03.181 { 00:21:03.181 "method": "sock_impl_set_options", 00:21:03.181 "params": { 00:21:03.181 "impl_name": "posix", 00:21:03.181 "recv_buf_size": 2097152, 00:21:03.181 "send_buf_size": 2097152, 00:21:03.181 "enable_recv_pipe": true, 00:21:03.181 "enable_quickack": false, 00:21:03.181 "enable_placement_id": 0, 00:21:03.181 "enable_zerocopy_send_server": true, 00:21:03.181 "enable_zerocopy_send_client": false, 
00:21:03.181 "zerocopy_threshold": 0, 00:21:03.181 "tls_version": 0, 00:21:03.181 "enable_ktls": false 00:21:03.181 } 00:21:03.181 } 00:21:03.181 ] 00:21:03.181 }, 00:21:03.181 { 00:21:03.181 "subsystem": "vmd", 00:21:03.181 "config": [] 00:21:03.181 }, 00:21:03.181 { 00:21:03.181 "subsystem": "accel", 00:21:03.181 "config": [ 00:21:03.181 { 00:21:03.181 "method": "accel_set_options", 00:21:03.181 "params": { 00:21:03.181 "small_cache_size": 128, 00:21:03.181 "large_cache_size": 16, 00:21:03.181 "task_count": 2048, 00:21:03.181 "sequence_count": 2048, 00:21:03.181 "buf_count": 2048 00:21:03.181 } 00:21:03.181 } 00:21:03.181 ] 00:21:03.181 }, 00:21:03.181 { 00:21:03.181 "subsystem": "bdev", 00:21:03.181 "config": [ 00:21:03.181 { 00:21:03.181 "method": "bdev_set_options", 00:21:03.181 "params": { 00:21:03.181 "bdev_io_pool_size": 65535, 00:21:03.181 "bdev_io_cache_size": 256, 00:21:03.181 "bdev_auto_examine": true, 00:21:03.181 "iobuf_small_cache_size": 128, 00:21:03.181 "iobuf_large_cache_size": 16 00:21:03.181 } 00:21:03.181 }, 00:21:03.181 { 00:21:03.181 "method": "bdev_raid_set_options", 00:21:03.181 "params": { 00:21:03.181 "process_window_size_kb": 1024 00:21:03.181 } 00:21:03.181 }, 00:21:03.181 { 00:21:03.181 "method": "bdev_iscsi_set_options", 00:21:03.181 "params": { 00:21:03.181 "timeout_sec": 30 00:21:03.181 } 00:21:03.181 }, 00:21:03.182 { 00:21:03.182 "method": "bdev_nvme_set_options", 00:21:03.182 "params": { 00:21:03.182 "action_on_timeout": "none", 00:21:03.182 "timeout_us": 0, 00:21:03.182 "timeout_admin_us": 0, 00:21:03.182 "keep_alive_timeout_ms": 10000, 00:21:03.182 "arbitration_burst": 0, 00:21:03.182 "low_priority_weight": 0, 00:21:03.182 "medium_priority_weight": 0, 00:21:03.182 "high_priority_weight": 0, 00:21:03.182 "nvme_adminq_poll_period_us": 10000, 00:21:03.182 "nvme_ioq_poll_period_us": 0, 00:21:03.182 "io_queue_requests": 512, 00:21:03.182 "delay_cmd_submit": true, 00:21:03.182 "transport_retry_count": 4, 00:21:03.182 
"bdev_retry_count": 3, 00:21:03.182 "transport_ack_timeout": 0, 00:21:03.182 "ctrlr_loss_timeout_sec": 0, 00:21:03.182 "reconnect_delay_sec": 0, 00:21:03.182 "fast_io_fail_timeout_sec": 0, 00:21:03.182 "disable_auto_failback": false, 00:21:03.182 "generate_uuids": false, 00:21:03.182 "transport_tos": 0, 00:21:03.182 "nvme_error_stat": false, 00:21:03.182 "rdma_srq_size": 0, 00:21:03.182 "io_path_stat": false, 00:21:03.182 "allow_accel_sequence": false, 00:21:03.182 "rdma_max_cq_size": 0, 00:21:03.182 "rdma_cm_event_timeout_ms": 0, 00:21:03.182 "dhchap_digests": [ 00:21:03.182 "sha256", 00:21:03.182 "sha384", 00:21:03.182 "sha512" 00:21:03.182 ], 00:21:03.182 "dhchap_dhgroups": [ 00:21:03.182 "null", 00:21:03.182 "ffdhe2048", 00:21:03.182 "ffdhe3072", 00:21:03.182 "ffdhe4096", 00:21:03.182 "ffdhe6144", 00:21:03.182 "ffdhe8192" 00:21:03.182 ] 00:21:03.182 } 00:21:03.182 }, 00:21:03.182 { 00:21:03.182 "method": "bdev_nvme_attach_controller", 00:21:03.182 "params": { 00:21:03.182 "name": "TLSTEST", 00:21:03.182 "trtype": "TCP", 00:21:03.182 "adrfam": "IPv4", 00:21:03.182 "traddr": "10.0.0.2", 00:21:03.182 "trsvcid": "4420", 00:21:03.182 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:03.182 "prchk_reftag": false, 00:21:03.182 "prchk_guard": false, 00:21:03.182 "ctrlr_loss_timeout_sec": 0, 00:21:03.182 "reconnect_delay_sec": 0, 00:21:03.182 "fast_io_fail_timeout_sec": 0, 00:21:03.182 "psk": "/tmp/tmp.g1HZ5lEdaB", 00:21:03.182 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:03.182 "hdgst": false, 00:21:03.182 "ddgst": false 00:21:03.182 } 00:21:03.182 }, 00:21:03.182 { 00:21:03.182 "method": "bdev_nvme_set_hotplug", 00:21:03.182 "params": { 00:21:03.182 "period_us": 100000, 00:21:03.182 "enable": false 00:21:03.182 } 00:21:03.182 }, 00:21:03.182 { 00:21:03.182 "method": "bdev_wait_for_examine" 00:21:03.182 } 00:21:03.182 ] 00:21:03.182 }, 00:21:03.182 { 00:21:03.182 "subsystem": "nbd", 00:21:03.182 "config": [] 00:21:03.182 } 00:21:03.182 ] 00:21:03.182 }' 00:21:03.182 
01:21:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:03.182 01:21:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:03.182 [2024-07-25 01:21:25.594541] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:21:03.182 [2024-07-25 01:21:25.594593] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid933583 ] 00:21:03.182 EAL: No free 2048 kB hugepages reported on node 1 00:21:03.182 [2024-07-25 01:21:25.645790] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:03.442 [2024-07-25 01:21:25.720863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:03.442 [2024-07-25 01:21:25.863575] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:03.442 [2024-07-25 01:21:25.863651] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:04.012 01:21:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:04.012 01:21:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:04.012 01:21:26 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:04.012 Running I/O for 10 seconds... 
00:21:16.232 00:21:16.232 Latency(us) 00:21:16.232 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:16.232 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:16.232 Verification LBA range: start 0x0 length 0x2000 00:21:16.232 TLSTESTn1 : 10.08 1373.32 5.36 0.00 0.00 92915.39 4701.50 160477.72 00:21:16.232 =================================================================================================================== 00:21:16.232 Total : 1373.32 5.36 0.00 0.00 92915.39 4701.50 160477.72 00:21:16.232 0 00:21:16.232 01:21:36 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:16.232 01:21:36 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 933583 00:21:16.232 01:21:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 933583 ']' 00:21:16.232 01:21:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 933583 00:21:16.232 01:21:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:16.232 01:21:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:16.232 01:21:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 933583 00:21:16.232 01:21:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:16.232 01:21:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:16.232 01:21:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 933583' 00:21:16.232 killing process with pid 933583 00:21:16.232 01:21:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 933583 00:21:16.232 Received shutdown signal, test time was about 10.000000 seconds 00:21:16.232 00:21:16.232 Latency(us) 00:21:16.232 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:16.232 
=================================================================================================================== 00:21:16.232 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:16.232 [2024-07-25 01:21:36.643408] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:16.232 01:21:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 933583 00:21:16.232 01:21:36 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 933376 00:21:16.232 01:21:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 933376 ']' 00:21:16.232 01:21:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 933376 00:21:16.232 01:21:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:16.232 01:21:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:16.232 01:21:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 933376 00:21:16.232 01:21:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:16.232 01:21:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:16.233 01:21:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 933376' 00:21:16.233 killing process with pid 933376 00:21:16.233 01:21:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 933376 00:21:16.233 [2024-07-25 01:21:36.867316] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:16.233 01:21:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 933376 00:21:16.233 01:21:37 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:21:16.233 01:21:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:16.233 01:21:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 
00:21:16.233 01:21:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:16.233 01:21:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=935463 00:21:16.233 01:21:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:16.233 01:21:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 935463 00:21:16.233 01:21:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 935463 ']' 00:21:16.233 01:21:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:16.233 01:21:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:16.233 01:21:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:16.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:16.233 01:21:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:16.233 01:21:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:16.233 [2024-07-25 01:21:37.110967] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:21:16.233 [2024-07-25 01:21:37.111012] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:16.233 EAL: No free 2048 kB hugepages reported on node 1 00:21:16.233 [2024-07-25 01:21:37.167155] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:16.233 [2024-07-25 01:21:37.245083] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:16.233 [2024-07-25 01:21:37.245121] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:16.233 [2024-07-25 01:21:37.245128] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:16.233 [2024-07-25 01:21:37.245134] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:16.233 [2024-07-25 01:21:37.245139] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:16.233 [2024-07-25 01:21:37.245155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:16.233 01:21:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:16.233 01:21:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:16.233 01:21:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:16.233 01:21:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:16.233 01:21:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:16.233 01:21:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:16.233 01:21:37 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.g1HZ5lEdaB 00:21:16.233 01:21:37 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.g1HZ5lEdaB 00:21:16.233 01:21:37 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:16.233 [2024-07-25 01:21:38.112723] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:16.233 01:21:38 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:16.233 01:21:38 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:16.233 [2024-07-25 01:21:38.461612] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:16.233 [2024-07-25 01:21:38.461791] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:16.233 01:21:38 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:16.233 malloc0 00:21:16.233 01:21:38 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:16.493 01:21:38 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.g1HZ5lEdaB 00:21:16.493 [2024-07-25 01:21:38.958997] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:16.493 01:21:38 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=935727 00:21:16.493 01:21:38 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:16.493 01:21:38 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:16.493 01:21:38 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 935727 /var/tmp/bdevperf.sock 00:21:16.493 01:21:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 935727 ']' 00:21:16.493 01:21:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:16.493 01:21:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:21:16.493 01:21:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:16.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:16.493 01:21:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:16.493 01:21:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:16.753 [2024-07-25 01:21:39.019770] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:21:16.753 [2024-07-25 01:21:39.019819] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid935727 ] 00:21:16.753 EAL: No free 2048 kB hugepages reported on node 1 00:21:16.753 [2024-07-25 01:21:39.074098] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:16.753 [2024-07-25 01:21:39.149157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:17.322 01:21:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:17.322 01:21:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:17.582 01:21:39 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.g1HZ5lEdaB 00:21:17.582 01:21:39 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:17.841 [2024-07-25 01:21:40.145374] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:17.841 
nvme0n1 00:21:17.841 01:21:40 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:17.841 Running I/O for 1 seconds... 00:21:19.218 00:21:19.218 Latency(us) 00:21:19.218 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:19.218 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:19.218 Verification LBA range: start 0x0 length 0x2000 00:21:19.218 nvme0n1 : 1.08 1157.61 4.52 0.00 0.00 107493.54 5698.78 149536.06 00:21:19.218 =================================================================================================================== 00:21:19.218 Total : 1157.61 4.52 0.00 0.00 107493.54 5698.78 149536.06 00:21:19.218 0 00:21:19.218 01:21:41 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 935727 00:21:19.218 01:21:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 935727 ']' 00:21:19.218 01:21:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 935727 00:21:19.218 01:21:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:19.218 01:21:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:19.218 01:21:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 935727 00:21:19.218 01:21:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:19.218 01:21:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:19.218 01:21:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 935727' 00:21:19.218 killing process with pid 935727 00:21:19.218 01:21:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 935727 00:21:19.218 Received shutdown signal, test time was about 1.000000 seconds 00:21:19.218 00:21:19.218 Latency(us) 00:21:19.218 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:21:19.218 =================================================================================================================== 00:21:19.218 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:19.218 01:21:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 935727 00:21:19.218 01:21:41 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 935463 00:21:19.218 01:21:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 935463 ']' 00:21:19.218 01:21:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 935463 00:21:19.218 01:21:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:19.218 01:21:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:19.218 01:21:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 935463 00:21:19.218 01:21:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:19.218 01:21:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:19.218 01:21:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 935463' 00:21:19.218 killing process with pid 935463 00:21:19.218 01:21:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 935463 00:21:19.218 [2024-07-25 01:21:41.692887] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:19.218 01:21:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 935463 00:21:19.479 01:21:41 nvmf_tcp.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:21:19.479 01:21:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:19.479 01:21:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:19.479 01:21:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:19.479 01:21:41 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@481 -- # nvmfpid=936202 00:21:19.479 01:21:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:19.479 01:21:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 936202 00:21:19.479 01:21:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 936202 ']' 00:21:19.479 01:21:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:19.479 01:21:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:19.479 01:21:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:19.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:19.479 01:21:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:19.479 01:21:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:19.479 [2024-07-25 01:21:41.937339] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:21:19.479 [2024-07-25 01:21:41.937387] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:19.479 EAL: No free 2048 kB hugepages reported on node 1 00:21:19.740 [2024-07-25 01:21:41.994349] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:19.740 [2024-07-25 01:21:42.072822] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:19.740 [2024-07-25 01:21:42.072860] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:19.740 [2024-07-25 01:21:42.072868] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:19.740 [2024-07-25 01:21:42.072874] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:19.740 [2024-07-25 01:21:42.072878] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:19.740 [2024-07-25 01:21:42.072895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:20.309 01:21:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:20.309 01:21:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:20.309 01:21:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:20.309 01:21:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:20.309 01:21:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:20.309 01:21:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:20.309 01:21:42 nvmf_tcp.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:21:20.309 01:21:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.309 01:21:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:20.309 [2024-07-25 01:21:42.791287] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:20.570 malloc0 00:21:20.570 [2024-07-25 01:21:42.819722] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:20.570 [2024-07-25 01:21:42.819899] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:20.570 01:21:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.570 01:21:42 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=936444 00:21:20.570 01:21:42 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 
936444 /var/tmp/bdevperf.sock 00:21:20.570 01:21:42 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:20.570 01:21:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 936444 ']' 00:21:20.570 01:21:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:20.570 01:21:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:20.570 01:21:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:20.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:20.570 01:21:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:20.570 01:21:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:20.570 [2024-07-25 01:21:42.891605] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:21:20.570 [2024-07-25 01:21:42.891645] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid936444 ] 00:21:20.570 EAL: No free 2048 kB hugepages reported on node 1 00:21:20.570 [2024-07-25 01:21:42.944537] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:20.570 [2024-07-25 01:21:43.017737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:21.509 01:21:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:21.509 01:21:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:21.509 01:21:43 nvmf_tcp.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.g1HZ5lEdaB 00:21:21.509 01:21:43 nvmf_tcp.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:21.827 [2024-07-25 01:21:44.021882] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:21.827 nvme0n1 00:21:21.827 01:21:44 nvmf_tcp.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:21.827 Running I/O for 1 seconds... 
00:21:23.209 00:21:23.209 Latency(us) 00:21:23.209 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:23.209 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:23.209 Verification LBA range: start 0x0 length 0x2000 00:21:23.209 nvme0n1 : 1.10 1173.58 4.58 0.00 0.00 105550.77 7123.48 163213.13 00:21:23.209 =================================================================================================================== 00:21:23.209 Total : 1173.58 4.58 0.00 0.00 105550.77 7123.48 163213.13 00:21:23.209 0 00:21:23.209 01:21:45 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:21:23.209 01:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.209 01:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:23.209 01:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.209 01:21:45 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:21:23.209 "subsystems": [ 00:21:23.209 { 00:21:23.209 "subsystem": "keyring", 00:21:23.209 "config": [ 00:21:23.209 { 00:21:23.209 "method": "keyring_file_add_key", 00:21:23.209 "params": { 00:21:23.209 "name": "key0", 00:21:23.209 "path": "/tmp/tmp.g1HZ5lEdaB" 00:21:23.209 } 00:21:23.209 } 00:21:23.209 ] 00:21:23.209 }, 00:21:23.209 { 00:21:23.209 "subsystem": "iobuf", 00:21:23.209 "config": [ 00:21:23.209 { 00:21:23.209 "method": "iobuf_set_options", 00:21:23.209 "params": { 00:21:23.209 "small_pool_count": 8192, 00:21:23.209 "large_pool_count": 1024, 00:21:23.209 "small_bufsize": 8192, 00:21:23.209 "large_bufsize": 135168 00:21:23.209 } 00:21:23.209 } 00:21:23.209 ] 00:21:23.209 }, 00:21:23.209 { 00:21:23.209 "subsystem": "sock", 00:21:23.209 "config": [ 00:21:23.209 { 00:21:23.209 "method": "sock_set_default_impl", 00:21:23.209 "params": { 00:21:23.209 "impl_name": "posix" 00:21:23.209 } 00:21:23.209 }, 00:21:23.209 { 00:21:23.209 "method": "sock_impl_set_options", 00:21:23.209 
"params": { 00:21:23.209 "impl_name": "ssl", 00:21:23.209 "recv_buf_size": 4096, 00:21:23.209 "send_buf_size": 4096, 00:21:23.209 "enable_recv_pipe": true, 00:21:23.209 "enable_quickack": false, 00:21:23.209 "enable_placement_id": 0, 00:21:23.209 "enable_zerocopy_send_server": true, 00:21:23.209 "enable_zerocopy_send_client": false, 00:21:23.209 "zerocopy_threshold": 0, 00:21:23.209 "tls_version": 0, 00:21:23.209 "enable_ktls": false 00:21:23.209 } 00:21:23.209 }, 00:21:23.209 { 00:21:23.209 "method": "sock_impl_set_options", 00:21:23.209 "params": { 00:21:23.209 "impl_name": "posix", 00:21:23.209 "recv_buf_size": 2097152, 00:21:23.209 "send_buf_size": 2097152, 00:21:23.209 "enable_recv_pipe": true, 00:21:23.209 "enable_quickack": false, 00:21:23.209 "enable_placement_id": 0, 00:21:23.209 "enable_zerocopy_send_server": true, 00:21:23.209 "enable_zerocopy_send_client": false, 00:21:23.209 "zerocopy_threshold": 0, 00:21:23.209 "tls_version": 0, 00:21:23.209 "enable_ktls": false 00:21:23.209 } 00:21:23.209 } 00:21:23.209 ] 00:21:23.209 }, 00:21:23.209 { 00:21:23.209 "subsystem": "vmd", 00:21:23.209 "config": [] 00:21:23.209 }, 00:21:23.209 { 00:21:23.209 "subsystem": "accel", 00:21:23.209 "config": [ 00:21:23.209 { 00:21:23.209 "method": "accel_set_options", 00:21:23.209 "params": { 00:21:23.209 "small_cache_size": 128, 00:21:23.209 "large_cache_size": 16, 00:21:23.209 "task_count": 2048, 00:21:23.209 "sequence_count": 2048, 00:21:23.209 "buf_count": 2048 00:21:23.209 } 00:21:23.209 } 00:21:23.209 ] 00:21:23.209 }, 00:21:23.209 { 00:21:23.209 "subsystem": "bdev", 00:21:23.209 "config": [ 00:21:23.209 { 00:21:23.209 "method": "bdev_set_options", 00:21:23.209 "params": { 00:21:23.209 "bdev_io_pool_size": 65535, 00:21:23.209 "bdev_io_cache_size": 256, 00:21:23.209 "bdev_auto_examine": true, 00:21:23.209 "iobuf_small_cache_size": 128, 00:21:23.209 "iobuf_large_cache_size": 16 00:21:23.209 } 00:21:23.209 }, 00:21:23.209 { 00:21:23.209 "method": "bdev_raid_set_options", 
00:21:23.209 "params": { 00:21:23.209 "process_window_size_kb": 1024 00:21:23.209 } 00:21:23.209 }, 00:21:23.209 { 00:21:23.209 "method": "bdev_iscsi_set_options", 00:21:23.209 "params": { 00:21:23.209 "timeout_sec": 30 00:21:23.209 } 00:21:23.209 }, 00:21:23.209 { 00:21:23.209 "method": "bdev_nvme_set_options", 00:21:23.209 "params": { 00:21:23.209 "action_on_timeout": "none", 00:21:23.209 "timeout_us": 0, 00:21:23.209 "timeout_admin_us": 0, 00:21:23.209 "keep_alive_timeout_ms": 10000, 00:21:23.209 "arbitration_burst": 0, 00:21:23.210 "low_priority_weight": 0, 00:21:23.210 "medium_priority_weight": 0, 00:21:23.210 "high_priority_weight": 0, 00:21:23.210 "nvme_adminq_poll_period_us": 10000, 00:21:23.210 "nvme_ioq_poll_period_us": 0, 00:21:23.210 "io_queue_requests": 0, 00:21:23.210 "delay_cmd_submit": true, 00:21:23.210 "transport_retry_count": 4, 00:21:23.210 "bdev_retry_count": 3, 00:21:23.210 "transport_ack_timeout": 0, 00:21:23.210 "ctrlr_loss_timeout_sec": 0, 00:21:23.210 "reconnect_delay_sec": 0, 00:21:23.210 "fast_io_fail_timeout_sec": 0, 00:21:23.210 "disable_auto_failback": false, 00:21:23.210 "generate_uuids": false, 00:21:23.210 "transport_tos": 0, 00:21:23.210 "nvme_error_stat": false, 00:21:23.210 "rdma_srq_size": 0, 00:21:23.210 "io_path_stat": false, 00:21:23.210 "allow_accel_sequence": false, 00:21:23.210 "rdma_max_cq_size": 0, 00:21:23.210 "rdma_cm_event_timeout_ms": 0, 00:21:23.210 "dhchap_digests": [ 00:21:23.210 "sha256", 00:21:23.210 "sha384", 00:21:23.210 "sha512" 00:21:23.210 ], 00:21:23.210 "dhchap_dhgroups": [ 00:21:23.210 "null", 00:21:23.210 "ffdhe2048", 00:21:23.210 "ffdhe3072", 00:21:23.210 "ffdhe4096", 00:21:23.210 "ffdhe6144", 00:21:23.210 "ffdhe8192" 00:21:23.210 ] 00:21:23.210 } 00:21:23.210 }, 00:21:23.210 { 00:21:23.210 "method": "bdev_nvme_set_hotplug", 00:21:23.210 "params": { 00:21:23.210 "period_us": 100000, 00:21:23.210 "enable": false 00:21:23.210 } 00:21:23.210 }, 00:21:23.210 { 00:21:23.210 "method": "bdev_malloc_create", 
00:21:23.210 "params": { 00:21:23.210 "name": "malloc0", 00:21:23.210 "num_blocks": 8192, 00:21:23.210 "block_size": 4096, 00:21:23.210 "physical_block_size": 4096, 00:21:23.210 "uuid": "4ec3693d-ee21-4547-b53a-671817a0b6dc", 00:21:23.210 "optimal_io_boundary": 0 00:21:23.210 } 00:21:23.210 }, 00:21:23.210 { 00:21:23.210 "method": "bdev_wait_for_examine" 00:21:23.210 } 00:21:23.210 ] 00:21:23.210 }, 00:21:23.210 { 00:21:23.210 "subsystem": "nbd", 00:21:23.210 "config": [] 00:21:23.210 }, 00:21:23.210 { 00:21:23.210 "subsystem": "scheduler", 00:21:23.210 "config": [ 00:21:23.210 { 00:21:23.210 "method": "framework_set_scheduler", 00:21:23.210 "params": { 00:21:23.210 "name": "static" 00:21:23.210 } 00:21:23.210 } 00:21:23.210 ] 00:21:23.210 }, 00:21:23.210 { 00:21:23.210 "subsystem": "nvmf", 00:21:23.210 "config": [ 00:21:23.210 { 00:21:23.210 "method": "nvmf_set_config", 00:21:23.210 "params": { 00:21:23.210 "discovery_filter": "match_any", 00:21:23.210 "admin_cmd_passthru": { 00:21:23.210 "identify_ctrlr": false 00:21:23.210 } 00:21:23.210 } 00:21:23.210 }, 00:21:23.210 { 00:21:23.210 "method": "nvmf_set_max_subsystems", 00:21:23.210 "params": { 00:21:23.210 "max_subsystems": 1024 00:21:23.210 } 00:21:23.210 }, 00:21:23.210 { 00:21:23.210 "method": "nvmf_set_crdt", 00:21:23.210 "params": { 00:21:23.210 "crdt1": 0, 00:21:23.210 "crdt2": 0, 00:21:23.210 "crdt3": 0 00:21:23.210 } 00:21:23.210 }, 00:21:23.210 { 00:21:23.210 "method": "nvmf_create_transport", 00:21:23.210 "params": { 00:21:23.210 "trtype": "TCP", 00:21:23.210 "max_queue_depth": 128, 00:21:23.210 "max_io_qpairs_per_ctrlr": 127, 00:21:23.210 "in_capsule_data_size": 4096, 00:21:23.210 "max_io_size": 131072, 00:21:23.210 "io_unit_size": 131072, 00:21:23.210 "max_aq_depth": 128, 00:21:23.210 "num_shared_buffers": 511, 00:21:23.210 "buf_cache_size": 4294967295, 00:21:23.210 "dif_insert_or_strip": false, 00:21:23.210 "zcopy": false, 00:21:23.210 "c2h_success": false, 00:21:23.210 "sock_priority": 0, 
00:21:23.210 "abort_timeout_sec": 1, 00:21:23.210 "ack_timeout": 0, 00:21:23.210 "data_wr_pool_size": 0 00:21:23.210 } 00:21:23.210 }, 00:21:23.210 { 00:21:23.210 "method": "nvmf_create_subsystem", 00:21:23.210 "params": { 00:21:23.210 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:23.210 "allow_any_host": false, 00:21:23.210 "serial_number": "00000000000000000000", 00:21:23.210 "model_number": "SPDK bdev Controller", 00:21:23.210 "max_namespaces": 32, 00:21:23.210 "min_cntlid": 1, 00:21:23.210 "max_cntlid": 65519, 00:21:23.210 "ana_reporting": false 00:21:23.210 } 00:21:23.210 }, 00:21:23.210 { 00:21:23.210 "method": "nvmf_subsystem_add_host", 00:21:23.210 "params": { 00:21:23.210 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:23.210 "host": "nqn.2016-06.io.spdk:host1", 00:21:23.210 "psk": "key0" 00:21:23.210 } 00:21:23.210 }, 00:21:23.210 { 00:21:23.210 "method": "nvmf_subsystem_add_ns", 00:21:23.210 "params": { 00:21:23.210 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:23.210 "namespace": { 00:21:23.210 "nsid": 1, 00:21:23.210 "bdev_name": "malloc0", 00:21:23.210 "nguid": "4EC3693DEE214547B53A671817A0B6DC", 00:21:23.210 "uuid": "4ec3693d-ee21-4547-b53a-671817a0b6dc", 00:21:23.210 "no_auto_visible": false 00:21:23.210 } 00:21:23.210 } 00:21:23.210 }, 00:21:23.210 { 00:21:23.210 "method": "nvmf_subsystem_add_listener", 00:21:23.210 "params": { 00:21:23.210 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:23.210 "listen_address": { 00:21:23.210 "trtype": "TCP", 00:21:23.210 "adrfam": "IPv4", 00:21:23.210 "traddr": "10.0.0.2", 00:21:23.210 "trsvcid": "4420" 00:21:23.210 }, 00:21:23.210 "secure_channel": false, 00:21:23.210 "sock_impl": "ssl" 00:21:23.210 } 00:21:23.210 } 00:21:23.210 ] 00:21:23.210 } 00:21:23.210 ] 00:21:23.210 }' 00:21:23.210 01:21:45 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:23.210 01:21:45 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 
00:21:23.210 "subsystems": [ 00:21:23.210 { 00:21:23.210 "subsystem": "keyring", 00:21:23.210 "config": [ 00:21:23.210 { 00:21:23.210 "method": "keyring_file_add_key", 00:21:23.210 "params": { 00:21:23.210 "name": "key0", 00:21:23.210 "path": "/tmp/tmp.g1HZ5lEdaB" 00:21:23.210 } 00:21:23.210 } 00:21:23.210 ] 00:21:23.210 }, 00:21:23.210 { 00:21:23.210 "subsystem": "iobuf", 00:21:23.210 "config": [ 00:21:23.210 { 00:21:23.210 "method": "iobuf_set_options", 00:21:23.210 "params": { 00:21:23.210 "small_pool_count": 8192, 00:21:23.210 "large_pool_count": 1024, 00:21:23.210 "small_bufsize": 8192, 00:21:23.210 "large_bufsize": 135168 00:21:23.210 } 00:21:23.210 } 00:21:23.210 ] 00:21:23.210 }, 00:21:23.210 { 00:21:23.210 "subsystem": "sock", 00:21:23.210 "config": [ 00:21:23.210 { 00:21:23.210 "method": "sock_set_default_impl", 00:21:23.210 "params": { 00:21:23.210 "impl_name": "posix" 00:21:23.210 } 00:21:23.210 }, 00:21:23.210 { 00:21:23.210 "method": "sock_impl_set_options", 00:21:23.210 "params": { 00:21:23.210 "impl_name": "ssl", 00:21:23.210 "recv_buf_size": 4096, 00:21:23.210 "send_buf_size": 4096, 00:21:23.210 "enable_recv_pipe": true, 00:21:23.210 "enable_quickack": false, 00:21:23.210 "enable_placement_id": 0, 00:21:23.210 "enable_zerocopy_send_server": true, 00:21:23.210 "enable_zerocopy_send_client": false, 00:21:23.210 "zerocopy_threshold": 0, 00:21:23.210 "tls_version": 0, 00:21:23.210 "enable_ktls": false 00:21:23.210 } 00:21:23.210 }, 00:21:23.210 { 00:21:23.210 "method": "sock_impl_set_options", 00:21:23.210 "params": { 00:21:23.210 "impl_name": "posix", 00:21:23.210 "recv_buf_size": 2097152, 00:21:23.210 "send_buf_size": 2097152, 00:21:23.210 "enable_recv_pipe": true, 00:21:23.210 "enable_quickack": false, 00:21:23.210 "enable_placement_id": 0, 00:21:23.210 "enable_zerocopy_send_server": true, 00:21:23.210 "enable_zerocopy_send_client": false, 00:21:23.210 "zerocopy_threshold": 0, 00:21:23.210 "tls_version": 0, 00:21:23.210 "enable_ktls": false 
00:21:23.210 } 00:21:23.210 } 00:21:23.210 ] 00:21:23.210 }, 00:21:23.210 { 00:21:23.210 "subsystem": "vmd", 00:21:23.210 "config": [] 00:21:23.210 }, 00:21:23.210 { 00:21:23.210 "subsystem": "accel", 00:21:23.210 "config": [ 00:21:23.210 { 00:21:23.210 "method": "accel_set_options", 00:21:23.210 "params": { 00:21:23.210 "small_cache_size": 128, 00:21:23.210 "large_cache_size": 16, 00:21:23.210 "task_count": 2048, 00:21:23.210 "sequence_count": 2048, 00:21:23.210 "buf_count": 2048 00:21:23.210 } 00:21:23.210 } 00:21:23.210 ] 00:21:23.210 }, 00:21:23.210 { 00:21:23.211 "subsystem": "bdev", 00:21:23.211 "config": [ 00:21:23.211 { 00:21:23.211 "method": "bdev_set_options", 00:21:23.211 "params": { 00:21:23.211 "bdev_io_pool_size": 65535, 00:21:23.211 "bdev_io_cache_size": 256, 00:21:23.211 "bdev_auto_examine": true, 00:21:23.211 "iobuf_small_cache_size": 128, 00:21:23.211 "iobuf_large_cache_size": 16 00:21:23.211 } 00:21:23.211 }, 00:21:23.211 { 00:21:23.211 "method": "bdev_raid_set_options", 00:21:23.211 "params": { 00:21:23.211 "process_window_size_kb": 1024 00:21:23.211 } 00:21:23.211 }, 00:21:23.211 { 00:21:23.211 "method": "bdev_iscsi_set_options", 00:21:23.211 "params": { 00:21:23.211 "timeout_sec": 30 00:21:23.211 } 00:21:23.211 }, 00:21:23.211 { 00:21:23.211 "method": "bdev_nvme_set_options", 00:21:23.211 "params": { 00:21:23.211 "action_on_timeout": "none", 00:21:23.211 "timeout_us": 0, 00:21:23.211 "timeout_admin_us": 0, 00:21:23.211 "keep_alive_timeout_ms": 10000, 00:21:23.211 "arbitration_burst": 0, 00:21:23.211 "low_priority_weight": 0, 00:21:23.211 "medium_priority_weight": 0, 00:21:23.211 "high_priority_weight": 0, 00:21:23.211 "nvme_adminq_poll_period_us": 10000, 00:21:23.211 "nvme_ioq_poll_period_us": 0, 00:21:23.211 "io_queue_requests": 512, 00:21:23.211 "delay_cmd_submit": true, 00:21:23.211 "transport_retry_count": 4, 00:21:23.211 "bdev_retry_count": 3, 00:21:23.211 "transport_ack_timeout": 0, 00:21:23.211 "ctrlr_loss_timeout_sec": 0, 00:21:23.211 
"reconnect_delay_sec": 0, 00:21:23.211 "fast_io_fail_timeout_sec": 0, 00:21:23.211 "disable_auto_failback": false, 00:21:23.211 "generate_uuids": false, 00:21:23.211 "transport_tos": 0, 00:21:23.211 "nvme_error_stat": false, 00:21:23.211 "rdma_srq_size": 0, 00:21:23.211 "io_path_stat": false, 00:21:23.211 "allow_accel_sequence": false, 00:21:23.211 "rdma_max_cq_size": 0, 00:21:23.211 "rdma_cm_event_timeout_ms": 0, 00:21:23.211 "dhchap_digests": [ 00:21:23.211 "sha256", 00:21:23.211 "sha384", 00:21:23.211 "sha512" 00:21:23.211 ], 00:21:23.211 "dhchap_dhgroups": [ 00:21:23.211 "null", 00:21:23.211 "ffdhe2048", 00:21:23.211 "ffdhe3072", 00:21:23.211 "ffdhe4096", 00:21:23.211 "ffdhe6144", 00:21:23.211 "ffdhe8192" 00:21:23.211 ] 00:21:23.211 } 00:21:23.211 }, 00:21:23.211 { 00:21:23.211 "method": "bdev_nvme_attach_controller", 00:21:23.211 "params": { 00:21:23.211 "name": "nvme0", 00:21:23.211 "trtype": "TCP", 00:21:23.211 "adrfam": "IPv4", 00:21:23.211 "traddr": "10.0.0.2", 00:21:23.211 "trsvcid": "4420", 00:21:23.211 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:23.211 "prchk_reftag": false, 00:21:23.211 "prchk_guard": false, 00:21:23.211 "ctrlr_loss_timeout_sec": 0, 00:21:23.211 "reconnect_delay_sec": 0, 00:21:23.211 "fast_io_fail_timeout_sec": 0, 00:21:23.211 "psk": "key0", 00:21:23.211 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:23.211 "hdgst": false, 00:21:23.211 "ddgst": false 00:21:23.211 } 00:21:23.211 }, 00:21:23.211 { 00:21:23.211 "method": "bdev_nvme_set_hotplug", 00:21:23.211 "params": { 00:21:23.211 "period_us": 100000, 00:21:23.211 "enable": false 00:21:23.211 } 00:21:23.211 }, 00:21:23.211 { 00:21:23.211 "method": "bdev_enable_histogram", 00:21:23.211 "params": { 00:21:23.211 "name": "nvme0n1", 00:21:23.211 "enable": true 00:21:23.211 } 00:21:23.211 }, 00:21:23.211 { 00:21:23.211 "method": "bdev_wait_for_examine" 00:21:23.211 } 00:21:23.211 ] 00:21:23.211 }, 00:21:23.211 { 00:21:23.211 "subsystem": "nbd", 00:21:23.211 "config": [] 00:21:23.211 } 
00:21:23.211 ] 00:21:23.211 }' 00:21:23.211 01:21:45 nvmf_tcp.nvmf_tls -- target/tls.sh@268 -- # killprocess 936444 00:21:23.211 01:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 936444 ']' 00:21:23.211 01:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 936444 00:21:23.211 01:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:23.211 01:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:23.211 01:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 936444 00:21:23.472 01:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:23.472 01:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:23.472 01:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 936444' 00:21:23.472 killing process with pid 936444 00:21:23.472 01:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 936444 00:21:23.472 Received shutdown signal, test time was about 1.000000 seconds 00:21:23.472 00:21:23.472 Latency(us) 00:21:23.472 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:23.472 =================================================================================================================== 00:21:23.472 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:23.472 01:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 936444 00:21:23.472 01:21:45 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # killprocess 936202 00:21:23.472 01:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 936202 ']' 00:21:23.472 01:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 936202 00:21:23.472 01:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:23.472 01:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:23.472 
01:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 936202 00:21:23.472 01:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:23.472 01:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:23.472 01:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 936202' 00:21:23.472 killing process with pid 936202 00:21:23.472 01:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 936202 00:21:23.472 01:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 936202 00:21:23.732 01:21:46 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:21:23.732 01:21:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:23.732 01:21:46 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:21:23.732 "subsystems": [ 00:21:23.732 { 00:21:23.732 "subsystem": "keyring", 00:21:23.732 "config": [ 00:21:23.732 { 00:21:23.732 "method": "keyring_file_add_key", 00:21:23.732 "params": { 00:21:23.732 "name": "key0", 00:21:23.732 "path": "/tmp/tmp.g1HZ5lEdaB" 00:21:23.732 } 00:21:23.732 } 00:21:23.732 ] 00:21:23.732 }, 00:21:23.732 { 00:21:23.732 "subsystem": "iobuf", 00:21:23.732 "config": [ 00:21:23.732 { 00:21:23.732 "method": "iobuf_set_options", 00:21:23.732 "params": { 00:21:23.732 "small_pool_count": 8192, 00:21:23.732 "large_pool_count": 1024, 00:21:23.733 "small_bufsize": 8192, 00:21:23.733 "large_bufsize": 135168 00:21:23.733 } 00:21:23.733 } 00:21:23.733 ] 00:21:23.733 }, 00:21:23.733 { 00:21:23.733 "subsystem": "sock", 00:21:23.733 "config": [ 00:21:23.733 { 00:21:23.733 "method": "sock_set_default_impl", 00:21:23.733 "params": { 00:21:23.733 "impl_name": "posix" 00:21:23.733 } 00:21:23.733 }, 00:21:23.733 { 00:21:23.733 "method": "sock_impl_set_options", 00:21:23.733 "params": { 00:21:23.733 "impl_name": "ssl", 00:21:23.733 "recv_buf_size": 4096, 00:21:23.733 
"send_buf_size": 4096, 00:21:23.733 "enable_recv_pipe": true, 00:21:23.733 "enable_quickack": false, 00:21:23.733 "enable_placement_id": 0, 00:21:23.733 "enable_zerocopy_send_server": true, 00:21:23.733 "enable_zerocopy_send_client": false, 00:21:23.733 "zerocopy_threshold": 0, 00:21:23.733 "tls_version": 0, 00:21:23.733 "enable_ktls": false 00:21:23.733 } 00:21:23.733 }, 00:21:23.733 { 00:21:23.733 "method": "sock_impl_set_options", 00:21:23.733 "params": { 00:21:23.733 "impl_name": "posix", 00:21:23.733 "recv_buf_size": 2097152, 00:21:23.733 "send_buf_size": 2097152, 00:21:23.733 "enable_recv_pipe": true, 00:21:23.733 "enable_quickack": false, 00:21:23.733 "enable_placement_id": 0, 00:21:23.733 "enable_zerocopy_send_server": true, 00:21:23.733 "enable_zerocopy_send_client": false, 00:21:23.733 "zerocopy_threshold": 0, 00:21:23.733 "tls_version": 0, 00:21:23.733 "enable_ktls": false 00:21:23.733 } 00:21:23.733 } 00:21:23.733 ] 00:21:23.733 }, 00:21:23.733 { 00:21:23.733 "subsystem": "vmd", 00:21:23.733 "config": [] 00:21:23.733 }, 00:21:23.733 { 00:21:23.733 "subsystem": "accel", 00:21:23.733 "config": [ 00:21:23.733 { 00:21:23.733 "method": "accel_set_options", 00:21:23.733 "params": { 00:21:23.733 "small_cache_size": 128, 00:21:23.733 "large_cache_size": 16, 00:21:23.733 "task_count": 2048, 00:21:23.733 "sequence_count": 2048, 00:21:23.733 "buf_count": 2048 00:21:23.733 } 00:21:23.733 } 00:21:23.733 ] 00:21:23.733 }, 00:21:23.733 { 00:21:23.733 "subsystem": "bdev", 00:21:23.733 "config": [ 00:21:23.733 { 00:21:23.733 "method": "bdev_set_options", 00:21:23.733 "params": { 00:21:23.733 "bdev_io_pool_size": 65535, 00:21:23.733 "bdev_io_cache_size": 256, 00:21:23.733 "bdev_auto_examine": true, 00:21:23.733 "iobuf_small_cache_size": 128, 00:21:23.733 "iobuf_large_cache_size": 16 00:21:23.733 } 00:21:23.733 }, 00:21:23.733 { 00:21:23.733 "method": "bdev_raid_set_options", 00:21:23.733 "params": { 00:21:23.733 "process_window_size_kb": 1024 00:21:23.733 } 00:21:23.733 
}, 00:21:23.733 { 00:21:23.733 "method": "bdev_iscsi_set_options", 00:21:23.733 "params": { 00:21:23.733 "timeout_sec": 30 00:21:23.733 } 00:21:23.733 }, 00:21:23.733 { 00:21:23.733 "method": "bdev_nvme_set_options", 00:21:23.733 "params": { 00:21:23.733 "action_on_timeout": "none", 00:21:23.733 "timeout_us": 0, 00:21:23.733 "timeout_admin_us": 0, 00:21:23.733 "keep_alive_timeout_ms": 10000, 00:21:23.733 "arbitration_burst": 0, 00:21:23.733 "low_priority_weight": 0, 00:21:23.733 "medium_priority_weight": 0, 00:21:23.733 "high_priority_weight": 0, 00:21:23.733 "nvme_adminq_poll_period_us": 10000, 00:21:23.733 "nvme_ioq_poll_period_us": 0, 00:21:23.733 "io_queue_requests": 0, 00:21:23.733 "delay_cmd_submit": true, 00:21:23.733 "transport_retry_count": 4, 00:21:23.733 "bdev_retry_count": 3, 00:21:23.733 "transport_ack_timeout": 0, 00:21:23.733 "ctrlr_loss_timeout_sec": 0, 00:21:23.733 "reconnect_delay_sec": 0, 00:21:23.733 "fast_io_fail_timeout_sec": 0, 00:21:23.733 "disable_auto_failback": false, 00:21:23.733 "generate_uuids": false, 00:21:23.733 "transport_tos": 0, 00:21:23.733 "nvme_error_stat": false, 00:21:23.733 "rdma_srq_size": 0, 00:21:23.733 "io_path_stat": false, 00:21:23.733 "allow_accel_sequence": false, 00:21:23.733 "rdma_max_cq_size": 0, 00:21:23.733 "rdma_cm_event_timeout_ms": 0, 00:21:23.733 "dhchap_digests": [ 00:21:23.733 "sha256", 00:21:23.733 "sha384", 00:21:23.733 "sha512" 00:21:23.733 ], 00:21:23.733 "dhchap_dhgroups": [ 00:21:23.733 "null", 00:21:23.733 "ffdhe2048", 00:21:23.733 "ffdhe3072", 00:21:23.733 "ffdhe4096", 00:21:23.733 "ffdhe6144", 00:21:23.733 "ffdhe8192" 00:21:23.733 ] 00:21:23.733 } 00:21:23.733 }, 00:21:23.733 { 00:21:23.733 "method": "bdev_nvme_set_hotplug", 00:21:23.733 "params": { 00:21:23.733 "period_us": 100000, 00:21:23.733 "enable": false 00:21:23.733 } 00:21:23.733 }, 00:21:23.733 { 00:21:23.733 "method": "bdev_malloc_create", 00:21:23.733 "params": { 00:21:23.733 "name": "malloc0", 00:21:23.733 "num_blocks": 8192, 
00:21:23.733 "block_size": 4096, 00:21:23.733 "physical_block_size": 4096, 00:21:23.733 "uuid": "4ec3693d-ee21-4547-b53a-671817a0b6dc", 00:21:23.733 "optimal_io_boundary": 0 00:21:23.733 } 00:21:23.733 }, 00:21:23.733 { 00:21:23.733 "method": "bdev_wait_for_examine" 00:21:23.733 } 00:21:23.733 ] 00:21:23.733 }, 00:21:23.733 { 00:21:23.733 "subsystem": "nbd", 00:21:23.733 "config": [] 00:21:23.733 }, 00:21:23.733 { 00:21:23.733 "subsystem": "scheduler", 00:21:23.733 "config": [ 00:21:23.733 { 00:21:23.733 "method": "framework_set_scheduler", 00:21:23.733 "params": { 00:21:23.733 "name": "static" 00:21:23.733 } 00:21:23.733 } 00:21:23.733 ] 00:21:23.733 }, 00:21:23.733 { 00:21:23.733 "subsystem": "nvmf", 00:21:23.733 "config": [ 00:21:23.733 { 00:21:23.733 "method": "nvmf_set_config", 00:21:23.733 "params": { 00:21:23.733 "discovery_filter": "match_any", 00:21:23.733 "admin_cmd_passthru": { 00:21:23.733 "identify_ctrlr": false 00:21:23.733 } 00:21:23.733 } 00:21:23.733 }, 00:21:23.733 { 00:21:23.733 "method": "nvmf_set_max_subsystems", 00:21:23.733 "params": { 00:21:23.733 "max_subsystems": 1024 00:21:23.733 } 00:21:23.733 }, 00:21:23.733 { 00:21:23.733 "method": "nvmf_set_crdt", 00:21:23.733 "params": { 00:21:23.733 "crdt1": 0, 00:21:23.733 "crdt2": 0, 00:21:23.733 "crdt3": 0 00:21:23.733 } 00:21:23.733 }, 00:21:23.733 { 00:21:23.733 "method": "nvmf_create_transport", 00:21:23.733 "params": { 00:21:23.733 "trtype": "TCP", 00:21:23.733 "max_queue_depth": 128, 00:21:23.733 "max_io_qpairs_per_ctrlr": 127, 00:21:23.733 "in_capsule_data_size": 4096, 00:21:23.733 "max_io_size": 131072, 00:21:23.733 "io_unit_size": 131072, 00:21:23.733 "max_aq_depth": 128, 00:21:23.733 "num_shared_buffers": 511, 00:21:23.733 "buf_cache_size": 4294967295, 00:21:23.733 "dif_insert_or_strip": false, 00:21:23.733 "zcopy": false, 00:21:23.733 "c2h_success": false, 00:21:23.733 "sock_priority": 0, 00:21:23.733 "abort_timeout_sec": 1, 00:21:23.733 "ack_timeout": 0, 00:21:23.733 
"data_wr_pool_size": 0 00:21:23.733 } 00:21:23.733 }, 00:21:23.733 { 00:21:23.733 "method": "nvmf_create_subsystem", 00:21:23.733 "params": { 00:21:23.733 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:23.733 "allow_any_host": false, 00:21:23.733 "serial_number": "00000000000000000000", 00:21:23.733 "model_number": "SPDK bdev Controller", 00:21:23.733 "max_namespaces": 32, 00:21:23.733 "min_cntlid": 1, 00:21:23.733 "max_cntlid": 65519, 00:21:23.733 "ana_reporting": false 00:21:23.733 } 00:21:23.733 }, 00:21:23.733 { 00:21:23.733 "method": "nvmf_subsystem_add_host", 00:21:23.733 "params": { 00:21:23.733 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:23.733 "host": "nqn.2016-06.io.spdk:host1", 00:21:23.733 "psk": "key0" 00:21:23.733 } 00:21:23.733 }, 00:21:23.733 { 00:21:23.733 "method": "nvmf_subsystem_add_ns", 00:21:23.733 "params": { 00:21:23.733 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:23.733 "namespace": { 00:21:23.733 "nsid": 1, 00:21:23.733 "bdev_name": "malloc0", 00:21:23.733 "nguid": "4EC3693DEE214547B53A671817A0B6DC", 00:21:23.733 "uuid": "4ec3693d-ee21-4547-b53a-671817a0b6dc", 00:21:23.733 "no_auto_visible": false 00:21:23.733 } 00:21:23.733 } 00:21:23.733 }, 00:21:23.733 { 00:21:23.733 "method": "nvmf_subsystem_add_listener", 00:21:23.733 "params": { 00:21:23.733 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:23.733 "listen_address": { 00:21:23.733 "trtype": "TCP", 00:21:23.733 "adrfam": "IPv4", 00:21:23.733 "traddr": "10.0.0.2", 00:21:23.733 "trsvcid": "4420" 00:21:23.733 }, 00:21:23.733 "secure_channel": false, 00:21:23.733 "sock_impl": "ssl" 00:21:23.733 } 00:21:23.733 } 00:21:23.733 ] 00:21:23.733 } 00:21:23.733 ] 00:21:23.733 }' 00:21:23.733 01:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:23.733 01:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:23.733 01:21:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=936928 00:21:23.734 01:21:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:21:23.734 01:21:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 936928 00:21:23.734 01:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 936928 ']' 00:21:23.734 01:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:23.734 01:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:23.734 01:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:23.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:23.734 01:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:23.734 01:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:23.734 [2024-07-25 01:21:46.183996] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:21:23.734 [2024-07-25 01:21:46.184039] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:23.734 EAL: No free 2048 kB hugepages reported on node 1 00:21:23.993 [2024-07-25 01:21:46.240386] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.993 [2024-07-25 01:21:46.318917] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:23.993 [2024-07-25 01:21:46.318954] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:23.994 [2024-07-25 01:21:46.318961] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:23.994 [2024-07-25 01:21:46.318967] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:23.994 [2024-07-25 01:21:46.318972] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:23.994 [2024-07-25 01:21:46.319019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:24.253 [2024-07-25 01:21:46.531296] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:24.253 [2024-07-25 01:21:46.569069] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:24.253 [2024-07-25 01:21:46.569235] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:24.512 01:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:24.512 01:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:24.512 01:21:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:24.512 01:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:24.512 01:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:24.772 01:21:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:24.772 01:21:47 nvmf_tcp.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=937171 00:21:24.772 01:21:47 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 937171 /var/tmp/bdevperf.sock 00:21:24.772 01:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 937171 ']' 00:21:24.772 01:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:24.772 01:21:47 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:21:24.772 01:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:24.772 01:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:24.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:24.772 01:21:47 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:21:24.772 "subsystems": [ 00:21:24.772 { 00:21:24.772 "subsystem": "keyring", 00:21:24.772 "config": [ 00:21:24.772 { 00:21:24.772 "method": "keyring_file_add_key", 00:21:24.772 "params": { 00:21:24.772 "name": "key0", 00:21:24.772 "path": "/tmp/tmp.g1HZ5lEdaB" 00:21:24.772 } 00:21:24.772 } 00:21:24.772 ] 00:21:24.772 }, 00:21:24.772 { 00:21:24.772 "subsystem": "iobuf", 00:21:24.772 "config": [ 00:21:24.772 { 00:21:24.772 "method": "iobuf_set_options", 00:21:24.772 "params": { 00:21:24.772 "small_pool_count": 8192, 00:21:24.772 "large_pool_count": 1024, 00:21:24.772 "small_bufsize": 8192, 00:21:24.772 "large_bufsize": 135168 00:21:24.772 } 00:21:24.772 } 00:21:24.772 ] 00:21:24.772 }, 00:21:24.772 { 00:21:24.772 "subsystem": "sock", 00:21:24.772 "config": [ 00:21:24.772 { 00:21:24.772 "method": "sock_set_default_impl", 00:21:24.772 "params": { 00:21:24.772 "impl_name": "posix" 00:21:24.772 } 00:21:24.772 }, 00:21:24.772 { 00:21:24.772 "method": "sock_impl_set_options", 00:21:24.772 "params": { 00:21:24.772 "impl_name": "ssl", 00:21:24.772 "recv_buf_size": 4096, 00:21:24.772 "send_buf_size": 4096, 00:21:24.772 "enable_recv_pipe": true, 00:21:24.772 "enable_quickack": false, 00:21:24.772 "enable_placement_id": 0, 00:21:24.772 "enable_zerocopy_send_server": true, 00:21:24.772 "enable_zerocopy_send_client": false, 00:21:24.772 "zerocopy_threshold": 0, 00:21:24.772 
"tls_version": 0, 00:21:24.772 "enable_ktls": false 00:21:24.772 } 00:21:24.772 }, 00:21:24.772 { 00:21:24.772 "method": "sock_impl_set_options", 00:21:24.772 "params": { 00:21:24.772 "impl_name": "posix", 00:21:24.772 "recv_buf_size": 2097152, 00:21:24.772 "send_buf_size": 2097152, 00:21:24.772 "enable_recv_pipe": true, 00:21:24.772 "enable_quickack": false, 00:21:24.772 "enable_placement_id": 0, 00:21:24.772 "enable_zerocopy_send_server": true, 00:21:24.772 "enable_zerocopy_send_client": false, 00:21:24.772 "zerocopy_threshold": 0, 00:21:24.772 "tls_version": 0, 00:21:24.772 "enable_ktls": false 00:21:24.772 } 00:21:24.772 } 00:21:24.772 ] 00:21:24.772 }, 00:21:24.772 { 00:21:24.772 "subsystem": "vmd", 00:21:24.772 "config": [] 00:21:24.772 }, 00:21:24.772 { 00:21:24.772 "subsystem": "accel", 00:21:24.772 "config": [ 00:21:24.772 { 00:21:24.772 "method": "accel_set_options", 00:21:24.772 "params": { 00:21:24.772 "small_cache_size": 128, 00:21:24.772 "large_cache_size": 16, 00:21:24.772 "task_count": 2048, 00:21:24.772 "sequence_count": 2048, 00:21:24.772 "buf_count": 2048 00:21:24.772 } 00:21:24.772 } 00:21:24.772 ] 00:21:24.772 }, 00:21:24.772 { 00:21:24.772 "subsystem": "bdev", 00:21:24.772 "config": [ 00:21:24.772 { 00:21:24.773 "method": "bdev_set_options", 00:21:24.773 "params": { 00:21:24.773 "bdev_io_pool_size": 65535, 00:21:24.773 "bdev_io_cache_size": 256, 00:21:24.773 "bdev_auto_examine": true, 00:21:24.773 "iobuf_small_cache_size": 128, 00:21:24.773 "iobuf_large_cache_size": 16 00:21:24.773 } 00:21:24.773 }, 00:21:24.773 { 00:21:24.773 "method": "bdev_raid_set_options", 00:21:24.773 "params": { 00:21:24.773 "process_window_size_kb": 1024 00:21:24.773 } 00:21:24.773 }, 00:21:24.773 { 00:21:24.773 "method": "bdev_iscsi_set_options", 00:21:24.773 "params": { 00:21:24.773 "timeout_sec": 30 00:21:24.773 } 00:21:24.773 }, 00:21:24.773 { 00:21:24.773 "method": "bdev_nvme_set_options", 00:21:24.773 "params": { 00:21:24.773 "action_on_timeout": "none", 
00:21:24.773 "timeout_us": 0, 00:21:24.773 "timeout_admin_us": 0, 00:21:24.773 "keep_alive_timeout_ms": 10000, 00:21:24.773 "arbitration_burst": 0, 00:21:24.773 "low_priority_weight": 0, 00:21:24.773 "medium_priority_weight": 0, 00:21:24.773 "high_priority_weight": 0, 00:21:24.773 "nvme_adminq_poll_period_us": 10000, 00:21:24.773 "nvme_ioq_poll_period_us": 0, 00:21:24.773 "io_queue_requests": 512, 00:21:24.773 "delay_cmd_submit": true, 00:21:24.773 "transport_retry_count": 4, 00:21:24.773 "bdev_retry_count": 3, 00:21:24.773 "transport_ack_timeout": 0, 00:21:24.773 "ctrlr_loss_timeout_sec": 0, 00:21:24.773 "reconnect_delay_sec": 0, 00:21:24.773 "fast_io_fail_timeout_sec": 0, 00:21:24.773 "disable_auto_failback": false, 00:21:24.773 "generate_uuids": false, 00:21:24.773 "transport_tos": 0, 00:21:24.773 "nvme_error_stat": false, 00:21:24.773 "rdma_srq_size": 0, 00:21:24.773 "io_path_stat": false, 00:21:24.773 "allow_accel_sequence": false, 00:21:24.773 "rdma_max_cq_size": 0, 00:21:24.773 "rdma_cm_event_timeout_ms": 0, 00:21:24.773 "dhchap_digests": [ 00:21:24.773 "sha256", 00:21:24.773 "sha384", 00:21:24.773 "sha512" 00:21:24.773 ], 00:21:24.773 "dhchap_dhgroups": [ 00:21:24.773 "null", 00:21:24.773 "ffdhe2048", 00:21:24.773 "ffdhe3072", 00:21:24.773 "ffdhe4096", 00:21:24.773 "ffdhe6144", 00:21:24.773 "ffdhe8192" 00:21:24.773 ] 00:21:24.773 } 00:21:24.773 }, 00:21:24.773 { 00:21:24.773 "method": "bdev_nvme_attach_controller", 00:21:24.773 "params": { 00:21:24.773 "name": "nvme0", 00:21:24.773 "trtype": "TCP", 00:21:24.773 "adrfam": "IPv4", 00:21:24.773 "traddr": "10.0.0.2", 00:21:24.773 "trsvcid": "4420", 00:21:24.773 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:24.773 "prchk_reftag": false, 00:21:24.773 "prchk_guard": false, 00:21:24.773 "ctrlr_loss_timeout_sec": 0, 00:21:24.773 "reconnect_delay_sec": 0, 00:21:24.773 "fast_io_fail_timeout_sec": 0, 00:21:24.773 "psk": "key0", 00:21:24.773 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:24.773 "hdgst": false, 
00:21:24.773 "ddgst": false 00:21:24.773 } 00:21:24.773 }, 00:21:24.773 { 00:21:24.773 "method": "bdev_nvme_set_hotplug", 00:21:24.773 "params": { 00:21:24.773 "period_us": 100000, 00:21:24.773 "enable": false 00:21:24.773 } 00:21:24.773 }, 00:21:24.773 { 00:21:24.773 "method": "bdev_enable_histogram", 00:21:24.773 "params": { 00:21:24.773 "name": "nvme0n1", 00:21:24.773 "enable": true 00:21:24.773 } 00:21:24.773 }, 00:21:24.773 { 00:21:24.773 "method": "bdev_wait_for_examine" 00:21:24.773 } 00:21:24.773 ] 00:21:24.773 }, 00:21:24.773 { 00:21:24.773 "subsystem": "nbd", 00:21:24.773 "config": [] 00:21:24.773 } 00:21:24.773 ] 00:21:24.773 }' 00:21:24.773 01:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:24.773 01:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:24.773 [2024-07-25 01:21:47.069423] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:21:24.773 [2024-07-25 01:21:47.069467] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid937171 ] 00:21:24.773 EAL: No free 2048 kB hugepages reported on node 1 00:21:24.773 [2024-07-25 01:21:47.123942] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:24.773 [2024-07-25 01:21:47.197213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:25.033 [2024-07-25 01:21:47.348641] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:25.601 01:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:25.601 01:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:25.601 01:21:47 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_controllers 00:21:25.601 01:21:47 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:21:25.601 01:21:48 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.601 01:21:48 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:25.861 Running I/O for 1 seconds... 00:21:26.800 00:21:26.800 Latency(us) 00:21:26.800 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:26.800 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:26.800 Verification LBA range: start 0x0 length 0x2000 00:21:26.800 nvme0n1 : 1.06 1064.67 4.16 0.00 0.00 117639.69 7208.96 175978.41 00:21:26.800 =================================================================================================================== 00:21:26.800 Total : 1064.67 4.16 0.00 0.00 117639.69 7208.96 175978.41 00:21:26.800 0 00:21:26.800 01:21:49 nvmf_tcp.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:21:26.800 01:21:49 nvmf_tcp.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:21:26.800 01:21:49 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:21:26.800 01:21:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:21:26.800 01:21:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:21:26.800 01:21:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:21:26.800 01:21:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:26.800 01:21:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:21:26.800 01:21:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:21:26.800 01:21:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:21:26.800 01:21:49 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:26.800 nvmf_trace.0 00:21:27.060 01:21:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:21:27.060 01:21:49 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 937171 00:21:27.060 01:21:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 937171 ']' 00:21:27.060 01:21:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 937171 00:21:27.060 01:21:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:27.060 01:21:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:27.060 01:21:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 937171 00:21:27.060 01:21:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:27.060 01:21:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:27.060 01:21:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 937171' 00:21:27.060 killing process with pid 937171 00:21:27.060 01:21:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 937171 00:21:27.060 Received shutdown signal, test time was about 1.000000 seconds 00:21:27.060 00:21:27.060 Latency(us) 00:21:27.060 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:27.060 =================================================================================================================== 00:21:27.060 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:27.060 01:21:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 937171 00:21:27.060 01:21:49 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:21:27.060 01:21:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:27.060 01:21:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 
00:21:27.060 01:21:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:27.060 01:21:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:21:27.060 01:21:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:27.060 01:21:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:27.060 rmmod nvme_tcp 00:21:27.060 rmmod nvme_fabrics 00:21:27.060 rmmod nvme_keyring 00:21:27.060 01:21:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:27.060 01:21:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:21:27.060 01:21:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:21:27.060 01:21:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 936928 ']' 00:21:27.060 01:21:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 936928 00:21:27.060 01:21:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 936928 ']' 00:21:27.060 01:21:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 936928 00:21:27.321 01:21:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:27.321 01:21:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:27.321 01:21:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 936928 00:21:27.321 01:21:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:27.321 01:21:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:27.321 01:21:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 936928' 00:21:27.321 killing process with pid 936928 00:21:27.321 01:21:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 936928 00:21:27.321 01:21:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 936928 00:21:27.321 01:21:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:27.321 01:21:49 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:27.321 01:21:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:27.321 01:21:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:27.321 01:21:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:27.321 01:21:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:27.321 01:21:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:27.321 01:21:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:29.860 01:21:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:29.860 01:21:51 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.LkPMsUNZ92 /tmp/tmp.L5192VaEnJ /tmp/tmp.g1HZ5lEdaB 00:21:29.860 00:21:29.860 real 1m24.182s 00:21:29.860 user 2m10.598s 00:21:29.860 sys 0m27.518s 00:21:29.860 01:21:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:29.860 01:21:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:29.860 ************************************ 00:21:29.860 END TEST nvmf_tls 00:21:29.860 ************************************ 00:21:29.860 01:21:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:29.860 01:21:51 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:29.860 01:21:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:29.860 01:21:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:29.860 01:21:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:29.860 ************************************ 00:21:29.860 START TEST nvmf_fips 00:21:29.860 ************************************ 00:21:29.861 01:21:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:29.861 * Looking for test storage... 00:21:29.861 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:29.861 01:21:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:21:29.862 01:21:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:21:29.862 01:21:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:21:29.862 01:21:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:21:29.862 01:21:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:21:29.862 01:21:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:21:29.862 01:21:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:29.862 01:21:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:21:29.862 01:21:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:21:29.862 01:21:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:21:29.862 01:21:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:29.862 01:21:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:21:29.862 01:21:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:29.862 01:21:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:21:29.862 01:21:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:29.862 01:21:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:21:29.862 01:21:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:29.862 01:21:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:21:29.862 01:21:52 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:21:29.862 01:21:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:21:29.862 Error setting digest 00:21:29.862 00D23BFED67F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:21:29.862 00D23BFED67F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:21:29.862 01:21:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:21:29.862 01:21:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:29.862 01:21:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:29.862 01:21:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:29.862 01:21:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:21:29.862 01:21:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:29.862 01:21:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:29.862 01:21:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:29.862 01:21:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:29.862 01:21:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:29.862 01:21:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:29.862 01:21:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:29.862 01:21:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:29.862 01:21:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:29.862 01:21:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:29.862 01:21:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # 
xtrace_disable 00:21:29.862 01:21:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:35.133 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:35.133 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:35.133 Found net devices under 0000:86:00.0: cvl_0_0 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
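The discovery loop above resolves each supported PCI NIC to its kernel network interface by globbing sysfs. A minimal sketch of that step, runnable without the CI hardware — `list_pci_net_devs` and `SYSFS_ROOT` are illustrative names, not SPDK's:

```shell
# Sketch of the NIC discovery step: given a PCI address, list the
# network interfaces the kernel exposes under sysfs for that device.
SYSFS_ROOT=${SYSFS_ROOT:-/sys/bus/pci/devices}

list_pci_net_devs() {
    local pci=$1 d
    for d in "$SYSFS_ROOT/$pci/net/"*; do
        [ -e "$d" ] || continue    # unmatched glob stays literal; skip it
        echo "${d##*/}"            # keep only the interface name
    done
}

# On the CI machine this yields e.g.:
#   list_pci_net_devs 0000:86:00.0  ->  cvl_0_0
```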
00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:35.133 Found net devices under 0000:86:00.1: cvl_0_1 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:35.133 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:35.133 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:35.133 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:21:35.133 00:21:35.133 --- 10.0.0.2 ping statistics --- 00:21:35.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:35.134 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:21:35.134 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:35.134 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:35.134 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.383 ms 00:21:35.134 00:21:35.134 --- 10.0.0.1 ping statistics --- 00:21:35.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:35.134 rtt min/avg/max/mdev = 0.383/0.383/0.383/0.000 ms 00:21:35.134 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:35.134 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:21:35.134 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:35.134 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:35.134 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:35.134 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:35.134 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:35.134 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:35.134 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:35.134 01:21:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:21:35.134 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:35.134 01:21:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:35.134 01:21:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:35.134 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=940965 00:21:35.134 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 940965 00:21:35.134 01:21:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 940965 ']' 00:21:35.134 01:21:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:35.134 01:21:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:35.134 01:21:57 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:35.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:35.134 01:21:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:35.134 01:21:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:35.134 01:21:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:35.134 [2024-07-25 01:21:57.334289] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:21:35.134 [2024-07-25 01:21:57.334335] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:35.134 EAL: No free 2048 kB hugepages reported on node 1 00:21:35.134 [2024-07-25 01:21:57.390205] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:35.134 [2024-07-25 01:21:57.467648] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:35.134 [2024-07-25 01:21:57.467683] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:35.134 [2024-07-25 01:21:57.467690] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:35.134 [2024-07-25 01:21:57.467696] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:35.134 [2024-07-25 01:21:57.467701] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
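The `ip`/`iptables` records above build the test topology: one port of the NIC is moved into its own network namespace so the target (10.0.0.2) and initiator (10.0.0.1) exchange real TCP traffic on a single host. A dry-run sketch of that plumbing — `run` and `setup_netns_topology` are illustrative helpers, not SPDK functions; the real commands need root:

```shell
# Dry-run sketch of the namespace topology from the log. "run" only
# echoes each command; redefine it as run() { "$@"; } to apply for real.
run() { echo "$*"; }

setup_netns_topology() {
    local ns=$1 tgt_if=$2 ini_if=$3
    run ip netns add "$ns"
    run ip link set "$tgt_if" netns "$ns"                  # target port leaves root ns
    run ip addr add 10.0.0.1/24 dev "$ini_if"              # initiator side
    run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
    run ip link set "$ini_if" up
    run ip netns exec "$ns" ip link set "$tgt_if" up
    run ip netns exec "$ns" ip link set lo up
    run iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
}

setup_netns_topology cvl_0_0_ns_spdk cvl_0_0 cvl_0_1
```

The cross-namespace pings in the log (10.0.0.1 <-> 10.0.0.2) then verify the wire before the target is started inside the namespace with `ip netns exec`.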
00:21:35.134 [2024-07-25 01:21:57.467718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:35.701 01:21:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:35.701 01:21:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:21:35.701 01:21:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:35.701 01:21:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:35.701 01:21:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:35.701 01:21:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:35.701 01:21:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:21:35.701 01:21:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:35.701 01:21:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:35.701 01:21:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:35.701 01:21:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:35.701 01:21:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:35.701 01:21:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:35.701 01:21:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:35.960 [2024-07-25 01:21:58.307552] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:35.960 [2024-07-25 01:21:58.323529] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS 
support is considered experimental 00:21:35.960 [2024-07-25 01:21:58.323679] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:35.960 [2024-07-25 01:21:58.351725] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:35.960 malloc0 00:21:35.960 01:21:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:35.960 01:21:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=941210 00:21:35.960 01:21:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 941210 /var/tmp/bdevperf.sock 00:21:35.960 01:21:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:35.960 01:21:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 941210 ']' 00:21:35.960 01:21:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:35.960 01:21:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:35.960 01:21:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:35.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:35.960 01:21:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:35.960 01:21:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:35.960 [2024-07-25 01:21:58.429825] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:21:35.960 [2024-07-25 01:21:58.429873] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid941210 ] 00:21:35.960 EAL: No free 2048 kB hugepages reported on node 1 00:21:36.219 [2024-07-25 01:21:58.480362] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:36.219 [2024-07-25 01:21:58.553237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:36.785 01:21:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:36.785 01:21:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:21:36.785 01:21:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:37.043 [2024-07-25 01:21:59.366811] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:37.043 [2024-07-25 01:21:59.366889] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:37.043 TLSTESTn1 00:21:37.043 01:21:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:37.302 Running I/O for 10 seconds... 
00:21:47.302 00:21:47.302 Latency(us) 00:21:47.302 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:47.302 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:47.302 Verification LBA range: start 0x0 length 0x2000 00:21:47.302 TLSTESTn1 : 10.09 1366.26 5.34 0.00 0.00 93347.67 6126.19 176890.21 00:21:47.302 =================================================================================================================== 00:21:47.302 Total : 1366.26 5.34 0.00 0.00 93347.67 6126.19 176890.21 00:21:47.302 0 00:21:47.302 01:22:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:21:47.302 01:22:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:21:47.302 01:22:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:21:47.302 01:22:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:21:47.302 01:22:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:21:47.302 01:22:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:47.302 01:22:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:21:47.302 01:22:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:21:47.302 01:22:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:21:47.302 01:22:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:47.302 nvmf_trace.0 00:21:47.302 01:22:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:21:47.302 01:22:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 941210 00:21:47.302 01:22:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 941210 ']' 00:21:47.302 01:22:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 
941210 00:21:47.302 01:22:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:21:47.302 01:22:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:47.302 01:22:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 941210 00:21:47.562 01:22:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:47.562 01:22:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:47.562 01:22:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 941210' 00:21:47.562 killing process with pid 941210 00:21:47.562 01:22:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 941210 00:21:47.562 Received shutdown signal, test time was about 10.000000 seconds 00:21:47.562 00:21:47.562 Latency(us) 00:21:47.562 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:47.562 =================================================================================================================== 00:21:47.562 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:47.562 [2024-07-25 01:22:09.818367] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:47.562 01:22:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 941210 00:21:47.562 01:22:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:21:47.562 01:22:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:47.562 01:22:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:21:47.562 01:22:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:47.562 01:22:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:21:47.562 01:22:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:47.562 01:22:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 
00:21:47.562 rmmod nvme_tcp 00:21:47.562 rmmod nvme_fabrics 00:21:47.562 rmmod nvme_keyring 00:21:47.562 01:22:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:47.820 01:22:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:21:47.820 01:22:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:21:47.820 01:22:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 940965 ']' 00:21:47.820 01:22:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 940965 00:21:47.820 01:22:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 940965 ']' 00:21:47.820 01:22:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 940965 00:21:47.820 01:22:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:21:47.820 01:22:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:47.820 01:22:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 940965 00:21:47.820 01:22:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:47.820 01:22:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:47.820 01:22:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 940965' 00:21:47.820 killing process with pid 940965 00:21:47.820 01:22:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 940965 00:21:47.820 [2024-07-25 01:22:10.108147] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:47.820 01:22:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 940965 00:21:47.820 01:22:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:47.820 01:22:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:47.820 01:22:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:47.820 
01:22:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:47.820 01:22:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:47.820 01:22:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:47.820 01:22:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:47.820 01:22:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:50.358 01:22:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:50.359 01:22:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:50.359 00:21:50.359 real 0m20.445s 00:21:50.359 user 0m23.100s 00:21:50.359 sys 0m8.198s 00:21:50.359 01:22:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:50.359 01:22:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:50.359 ************************************ 00:21:50.359 END TEST nvmf_fips 00:21:50.359 ************************************ 00:21:50.359 01:22:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:50.359 01:22:12 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:21:50.359 01:22:12 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:21:50.359 01:22:12 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:21:50.359 01:22:12 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:21:50.359 01:22:12 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:21:50.359 01:22:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:55.641 01:22:17 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:55.641 01:22:17 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:21:55.641 01:22:17 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:55.641 01:22:17 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 
00:21:55.641 01:22:17 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:55.641 01:22:17 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:55.641 01:22:17 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:55.641 01:22:17 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:21:55.641 01:22:17 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:55.641 01:22:17 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:21:55.641 01:22:17 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:21:55.641 01:22:17 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:21:55.641 01:22:17 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:21:55.641 01:22:17 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:21:55.641 01:22:17 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:21:55.641 01:22:17 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:55.641 01:22:17 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:55.641 01:22:17 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:55.641 01:22:17 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:55.641 01:22:17 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:55.641 01:22:17 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:55.641 01:22:17 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:55.641 01:22:17 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:55.641 01:22:17 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:55.641 01:22:17 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:55.641 01:22:17 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:55.641 01:22:17 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 
00:21:55.641 01:22:17 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:55.641 01:22:17 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:55.641 01:22:17 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:55.641 01:22:17 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:55.641 01:22:17 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:55.641 01:22:17 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:55.641 01:22:17 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:55.641 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:55.641 01:22:17 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:55.641 01:22:17 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:55.641 01:22:17 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:55.641 01:22:17 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:55.641 01:22:17 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:55.641 01:22:17 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:55.641 01:22:17 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:55.641 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:55.641 01:22:17 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:55.641 01:22:17 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:55.641 01:22:17 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:55.641 01:22:17 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:55.641 01:22:17 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:55.641 01:22:17 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:55.641 01:22:17 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:55.641 01:22:17 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:55.641 01:22:17 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:55.641 
01:22:17 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:55.641 01:22:17 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:55.641 01:22:17 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:55.641 01:22:17 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:55.641 01:22:17 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:55.641 01:22:17 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:55.641 01:22:17 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:55.641 Found net devices under 0000:86:00.0: cvl_0_0 00:21:55.641 01:22:17 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:55.641 01:22:17 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:55.641 01:22:17 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:55.641 01:22:17 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:55.641 01:22:17 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:55.641 01:22:17 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:55.641 01:22:17 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:55.641 01:22:17 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:55.641 01:22:17 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:55.641 Found net devices under 0000:86:00.1: cvl_0_1 00:21:55.641 01:22:17 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:55.641 01:22:17 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:55.641 01:22:17 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:55.642 01:22:17 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:21:55.642 01:22:17 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:55.642 01:22:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:55.642 01:22:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:55.642 01:22:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:55.642 ************************************ 00:21:55.642 START TEST nvmf_perf_adq 00:21:55.642 ************************************ 00:21:55.642 01:22:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:55.642 * Looking for test storage... 00:21:55.642 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:55.642 01:22:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:55.642 01:22:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:21:55.642 01:22:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:55.642 01:22:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:55.642 01:22:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:55.642 01:22:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:55.642 01:22:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:55.642 01:22:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:55.642 01:22:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:55.642 01:22:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:55.642 01:22:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:55.642 01:22:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:55.642 01:22:17 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:55.642 01:22:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:55.642 01:22:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:55.642 01:22:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:55.642 01:22:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:55.642 01:22:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:55.642 01:22:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:55.642 01:22:17 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:55.642 01:22:17 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:55.642 01:22:17 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:55.642 01:22:17 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.642 01:22:17 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.642 01:22:17 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.642 01:22:17 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:21:55.642 01:22:17 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.642 01:22:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:21:55.642 01:22:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
00:21:55.642 01:22:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:55.642 01:22:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:55.642 01:22:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:55.642 01:22:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:55.642 01:22:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:55.642 01:22:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:55.642 01:22:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:55.642 01:22:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:21:55.642 01:22:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:21:55.642 01:22:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:00.939 01:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:00.939 01:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:00.939 01:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:00.939 01:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:00.939 01:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:00.939 01:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:00.939 01:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:00.939 01:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:00.939 01:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:00.939 01:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:00.939 01:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:00.939 01:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- 
# x722=() 00:22:00.939 01:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:00.939 01:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:00.939 01:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:00.939 01:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:00.939 01:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:00.939 01:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:00.939 01:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:00.939 01:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:00.939 01:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:00.939 01:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:00.939 01:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:00.939 01:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:00.939 01:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:00.939 01:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:00.939 01:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:00.939 01:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:00.939 01:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:00.939 01:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:00.939 01:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 
00:22:00.939 01:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:00.939 01:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:00.939 01:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:00.939 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:00.939 01:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:00.939 01:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:00.939 01:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:00.939 01:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:00.939 01:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:00.939 01:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:00.939 01:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:00.939 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:00.939 01:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:00.939 01:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:00.939 01:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:00.939 01:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:00.939 01:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:00.939 01:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:00.939 01:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:00.939 01:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:00.939 01:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:00.939 01:22:22 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:00.939 01:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:00.939 01:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:00.939 01:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:00.939 01:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:00.939 01:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:00.939 01:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:00.939 Found net devices under 0000:86:00.0: cvl_0_0 00:22:00.939 01:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:00.939 01:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:00.939 01:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:00.939 01:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:00.939 01:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:00.939 01:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:00.939 01:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:00.939 01:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:00.939 01:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:00.939 Found net devices under 0000:86:00.1: cvl_0_1 00:22:00.939 01:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:00.939 01:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:00.939 01:22:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:00.939 01:22:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:22:00.939 01:22:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:00.939 01:22:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:22:00.939 01:22:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:22:01.199 01:22:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:22:03.110 01:22:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:22:08.394 01:22:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:22:08.394 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:08.394 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:08.394 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:08.394 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:08.394 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:08.394 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:08.394 01:22:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:08.394 01:22:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:08.394 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:08.394 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:08.394 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:08.394 01:22:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:08.394 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:22:08.394 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:08.394 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:08.394 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:08.394 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:08.394 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:08.394 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:08.394 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:08.394 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:08.394 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:08.394 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:08.394 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:08.395 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 
(0x8086 - 0x159b)' 00:22:08.395 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:08.395 Found net devices under 0000:86:00.0: cvl_0_0 00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:08.395 Found net devices under 0000:86:00.1: cvl_0_1 00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:22:08.395 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:22:08.395 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms
00:22:08.395
00:22:08.395 --- 10.0.0.2 ping statistics ---
00:22:08.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:08.395 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms
00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:22:08.395 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:22:08.395 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms
00:22:08.395
00:22:08.395 --- 10.0.0.1 ping statistics ---
00:22:08.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:08.395 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms
00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0
00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc
00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable
00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=950925
00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 950925
00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 --
# '[' -z 950925 ']' 00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:08.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:08.395 01:22:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:08.395 [2024-07-25 01:22:30.720150] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:22:08.395 [2024-07-25 01:22:30.720196] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:08.395 EAL: No free 2048 kB hugepages reported on node 1 00:22:08.395 [2024-07-25 01:22:30.778443] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:08.395 [2024-07-25 01:22:30.857377] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:08.396 [2024-07-25 01:22:30.857418] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:08.396 [2024-07-25 01:22:30.857425] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:08.396 [2024-07-25 01:22:30.857430] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:08.396 [2024-07-25 01:22:30.857436] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:08.396 [2024-07-25 01:22:30.857481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:08.396 [2024-07-25 01:22:30.857556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:08.396 [2024-07-25 01:22:30.857644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:08.396 [2024-07-25 01:22:30.857646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:09.335 01:22:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:09.335 01:22:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:22:09.335 01:22:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:09.335 01:22:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:09.335 01:22:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:09.335 01:22:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:09.335 01:22:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:22:09.335 01:22:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:09.335 01:22:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:09.335 01:22:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.335 01:22:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:09.335 01:22:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.335 01:22:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:09.335 01:22:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:22:09.335 01:22:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 
00:22:09.335 01:22:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:09.335 01:22:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.335 01:22:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:09.335 01:22:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.335 01:22:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:09.335 01:22:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.335 01:22:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:22:09.335 01:22:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.335 01:22:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:09.335 [2024-07-25 01:22:31.715691] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:09.335 01:22:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.335 01:22:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:09.335 01:22:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.335 01:22:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:09.335 Malloc1 00:22:09.335 01:22:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.335 01:22:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:09.335 01:22:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.335 01:22:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:09.335 01:22:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.335 
01:22:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:22:09.335 01:22:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:09.335 01:22:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:09.335 01:22:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:09.335 01:22:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:22:09.335 01:22:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:09.335 01:22:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:09.335 [2024-07-25 01:22:31.762136] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:09.335 01:22:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:09.335 01:22:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=951174
00:22:09.335 01:22:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2
00:22:09.335 01:22:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:22:09.335 EAL: No free 2048 kB hugepages reported on node 1
00:22:11.881 01:22:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats
00:22:11.881 01:22:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:11.881 01:22:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:11.881 01:22:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:11.881 01:22:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{
00:22:11.881 "tick_rate": 2300000000,
00:22:11.881 "poll_groups": [
00:22:11.881 {
00:22:11.881 "name": "nvmf_tgt_poll_group_000",
00:22:11.881 "admin_qpairs": 1,
00:22:11.881 "io_qpairs": 1,
00:22:11.881 "current_admin_qpairs": 1,
00:22:11.881 "current_io_qpairs": 1,
00:22:11.881 "pending_bdev_io": 0,
00:22:11.881 "completed_nvme_io": 20463,
00:22:11.881 "transports": [
00:22:11.881 {
00:22:11.881 "trtype": "TCP"
00:22:11.881 }
00:22:11.881 ]
00:22:11.881 },
00:22:11.881 {
00:22:11.881 "name": "nvmf_tgt_poll_group_001",
00:22:11.881 "admin_qpairs": 0,
00:22:11.881 "io_qpairs": 1,
00:22:11.881 "current_admin_qpairs": 0,
00:22:11.881 "current_io_qpairs": 1,
00:22:11.881 "pending_bdev_io": 0,
00:22:11.881 "completed_nvme_io": 20192,
00:22:11.881 "transports": [
00:22:11.881 {
00:22:11.881 "trtype": "TCP"
00:22:11.881 }
00:22:11.881 ]
00:22:11.881 },
00:22:11.881 {
00:22:11.881 "name": "nvmf_tgt_poll_group_002",
00:22:11.881 "admin_qpairs": 0,
00:22:11.881 "io_qpairs": 1,
00:22:11.881 "current_admin_qpairs": 0,
00:22:11.881 "current_io_qpairs": 1,
00:22:11.881 "pending_bdev_io": 0,
00:22:11.881 "completed_nvme_io": 18274,
00:22:11.881 "transports": [
00:22:11.881 {
00:22:11.881 "trtype": "TCP"
00:22:11.881 }
00:22:11.881 ]
00:22:11.881 },
00:22:11.881 {
00:22:11.881 "name": "nvmf_tgt_poll_group_003",
00:22:11.881 "admin_qpairs": 0,
00:22:11.881 "io_qpairs": 1,
00:22:11.881 "current_admin_qpairs": 0,
00:22:11.881 "current_io_qpairs": 1,
00:22:11.881 "pending_bdev_io": 0,
00:22:11.881 "completed_nvme_io": 17188,
00:22:11.881 "transports": [
00:22:11.881 {
00:22:11.881 "trtype": "TCP"
00:22:11.881 }
00:22:11.881 ]
00:22:11.881 }
00:22:11.881 ]
00:22:11.881 }'
00:22:11.881 01:22:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length'
00:22:11.881 01:22:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l
00:22:11.881 01:22:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4
00:22:11.881 01:22:33
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]]
00:22:11.881 01:22:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 951174
00:22:20.083 Initializing NVMe Controllers
00:22:20.083 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:20.083 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4
00:22:20.083 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5
00:22:20.083 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6
00:22:20.083 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7
00:22:20.083 Initialization complete. Launching workers.
00:22:20.083 ========================================================
00:22:20.083 Latency(us)
00:22:20.083 Device Information : IOPS MiB/s Average min max
00:22:20.083 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 9455.70 36.94 6769.36 2069.97 14940.77
00:22:20.083 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10847.00 42.37 5900.61 1442.87 11694.90
00:22:20.083 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 9995.70 39.05 6402.61 1568.79 14519.12
00:22:20.083 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10979.80 42.89 5830.12 1752.35 10958.99
00:22:20.083 ========================================================
00:22:20.083 Total : 41278.20 161.24 6202.43 1442.87 14940.77
00:22:20.083
00:22:20.083 01:22:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini
00:22:20.083 01:22:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup
00:22:20.083 01:22:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync
00:22:20.083 01:22:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:22:20.083 01:22:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e
00:22:20.083 01:22:41
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:20.083 01:22:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:20.083 rmmod nvme_tcp 00:22:20.083 rmmod nvme_fabrics 00:22:20.083 rmmod nvme_keyring 00:22:20.083 01:22:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:20.083 01:22:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:22:20.083 01:22:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:22:20.083 01:22:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 950925 ']' 00:22:20.083 01:22:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 950925 00:22:20.083 01:22:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 950925 ']' 00:22:20.083 01:22:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 950925 00:22:20.083 01:22:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:22:20.083 01:22:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:20.083 01:22:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 950925 00:22:20.083 01:22:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:20.083 01:22:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:20.083 01:22:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 950925' 00:22:20.083 killing process with pid 950925 00:22:20.083 01:22:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 950925 00:22:20.083 01:22:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 950925 00:22:20.083 01:22:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:20.083 01:22:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:20.083 01:22:42 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:20.083 01:22:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:20.083 01:22:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:20.083 01:22:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:20.083 01:22:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:20.083 01:22:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:22.002 01:22:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:22.002 01:22:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:22:22.002 01:22:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:22:22.941 01:22:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:22:24.850 01:22:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ 
phy != virt ]] 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:30.136 01:22:52 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:30.136 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:30.136 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 
'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:30.136 Found net devices under 0000:86:00.0: cvl_0_0 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:30.136 Found net devices under 0000:86:00.1: cvl_0_1 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:30.136 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:30.136 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:30.136 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.149 ms 00:22:30.136 00:22:30.136 --- 10.0.0.2 ping statistics --- 00:22:30.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:30.137 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:22:30.137 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:30.137 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:30.137 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:22:30.137 00:22:30.137 --- 10.0.0.1 ping statistics --- 00:22:30.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:30.137 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:22:30.137 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:30.137 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:22:30.137 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:30.137 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:30.137 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:30.137 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:30.137 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:30.137 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:30.137 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:30.137 01:22:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:22:30.137 01:22:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:22:30.137 01:22:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 
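The interface-selection step in the entries above (TCP_INTERFACE_LIST, NVMF_TARGET_INTERFACE, NVMF_INITIATOR_INTERFACE) can be sketched in plain bash; a minimal sketch, assuming the two `cvl_0_*` device names reported by the log, with no root or hardware required:

```shell
#!/usr/bin/env bash
# Sketch of nvmf_tcp_init interface selection: with two or more detected
# net devices, the first becomes the target side and the second the
# initiator side. The cvl_0_0/cvl_0_1 names are taken from the log above
# and are hardware-specific assumptions.
net_devs=(cvl_0_0 cvl_0_1)
TCP_INTERFACE_LIST=("${net_devs[@]}")
if (( ${#TCP_INTERFACE_LIST[@]} > 1 )); then
  NVMF_TARGET_INTERFACE=${TCP_INTERFACE_LIST[0]}
  NVMF_INITIATOR_INTERFACE=${TCP_INTERFACE_LIST[1]}
fi
echo "target=$NVMF_TARGET_INTERFACE initiator=$NVMF_INITIATOR_INTERFACE"
```

In the real run the target interface is then moved into the `cvl_0_0_ns_spdk` namespace and the two sides are given 10.0.0.2 and 10.0.0.1, which the pings above verify.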
00:22:30.137 01:22:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:22:30.137 net.core.busy_poll = 1 00:22:30.137 01:22:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:22:30.137 net.core.busy_read = 1 00:22:30.137 01:22:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:22:30.137 01:22:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:22:30.137 01:22:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:22:30.137 01:22:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:22:30.137 01:22:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:22:30.397 01:22:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:30.397 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:30.397 01:22:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:30.397 01:22:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:30.397 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=954926 00:22:30.397 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 954926 00:22:30.397 01:22:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:30.397 01:22:52 
nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 954926 ']' 00:22:30.397 01:22:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:30.397 01:22:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:30.397 01:22:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:30.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:30.397 01:22:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:30.397 01:22:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:30.397 [2024-07-25 01:22:52.702860] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:22:30.397 [2024-07-25 01:22:52.702908] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:30.397 EAL: No free 2048 kB hugepages reported on node 1 00:22:30.397 [2024-07-25 01:22:52.760670] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:30.397 [2024-07-25 01:22:52.833604] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:30.397 [2024-07-25 01:22:52.833647] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:30.397 [2024-07-25 01:22:52.833654] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:30.397 [2024-07-25 01:22:52.833660] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
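The adq_configure_driver entries a few lines above (busy-poll sysctls, mqprio qdisc, flower filter steering port 4420 to hw_tc 1) can be sketched as a dry run; a hedged sketch that only prints the commands, since the real ones need root, the `ice` NIC, and the `cvl_0_0_ns_spdk` namespace from this run:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the ADQ traffic-class setup from the log above.
# run() is a stand-in that echoes instead of executing; the device name
# and addresses are assumptions copied from this particular test box.
dev=cvl_0_0                      # E810 port name reported in the log
run() { echo "+ $*"; }           # replace with sudo/netns exec on real hw
run sysctl -w net.core.busy_poll=1
run sysctl -w net.core.busy_read=1
run tc qdisc add dev "$dev" root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
run tc qdisc add dev "$dev" ingress
run tc filter add dev "$dev" protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
```

The mqprio map splits queues into two traffic classes (2@0 and 2@2), and the flower filter pins NVMe/TCP traffic on port 4420 to the second class in hardware (`skip_sw hw_tc 1`).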
00:22:30.397 [2024-07-25 01:22:52.833666] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:30.397 [2024-07-25 01:22:52.833711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:30.397 [2024-07-25 01:22:52.833807] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:30.397 [2024-07-25 01:22:52.833874] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:30.397 [2024-07-25 01:22:52.833875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:31.338 01:22:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:31.338 01:22:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:22:31.338 01:22:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:31.339 01:22:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:31.339 01:22:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:31.339 01:22:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:31.339 01:22:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:22:31.339 01:22:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:31.339 01:22:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:31.339 01:22:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.339 01:22:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:31.339 01:22:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.339 01:22:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:31.339 01:22:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 
--enable-zerocopy-send-server -i posix 00:22:31.339 01:22:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.339 01:22:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:31.339 01:22:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.339 01:22:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:31.339 01:22:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.339 01:22:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:31.339 01:22:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.339 01:22:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:22:31.339 01:22:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.339 01:22:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:31.339 [2024-07-25 01:22:53.699934] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:31.339 01:22:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.339 01:22:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:31.339 01:22:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.339 01:22:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:31.339 Malloc1 00:22:31.339 01:22:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.339 01:22:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:31.339 01:22:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.339 01:22:53 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:22:31.339 01:22:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.339 01:22:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:31.339 01:22:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.339 01:22:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:31.339 01:22:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.339 01:22:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:31.339 01:22:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.339 01:22:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:31.339 [2024-07-25 01:22:53.747918] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:31.339 01:22:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.339 01:22:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=955007 00:22:31.339 01:22:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:22:31.339 01:22:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:31.339 EAL: No free 2048 kB hugepages reported on node 1 00:22:33.879 01:22:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:22:33.879 01:22:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.879 01:22:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:33.879 01:22:55 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.879 01:22:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:22:33.879 "tick_rate": 2300000000, 00:22:33.879 "poll_groups": [ 00:22:33.879 { 00:22:33.879 "name": "nvmf_tgt_poll_group_000", 00:22:33.879 "admin_qpairs": 1, 00:22:33.879 "io_qpairs": 3, 00:22:33.879 "current_admin_qpairs": 1, 00:22:33.879 "current_io_qpairs": 3, 00:22:33.879 "pending_bdev_io": 0, 00:22:33.879 "completed_nvme_io": 31295, 00:22:33.879 "transports": [ 00:22:33.879 { 00:22:33.879 "trtype": "TCP" 00:22:33.879 } 00:22:33.879 ] 00:22:33.879 }, 00:22:33.879 { 00:22:33.879 "name": "nvmf_tgt_poll_group_001", 00:22:33.879 "admin_qpairs": 0, 00:22:33.879 "io_qpairs": 1, 00:22:33.879 "current_admin_qpairs": 0, 00:22:33.880 "current_io_qpairs": 1, 00:22:33.880 "pending_bdev_io": 0, 00:22:33.880 "completed_nvme_io": 20982, 00:22:33.880 "transports": [ 00:22:33.880 { 00:22:33.880 "trtype": "TCP" 00:22:33.880 } 00:22:33.880 ] 00:22:33.880 }, 00:22:33.880 { 00:22:33.880 "name": "nvmf_tgt_poll_group_002", 00:22:33.880 "admin_qpairs": 0, 00:22:33.880 "io_qpairs": 0, 00:22:33.880 "current_admin_qpairs": 0, 00:22:33.880 "current_io_qpairs": 0, 00:22:33.880 "pending_bdev_io": 0, 00:22:33.880 "completed_nvme_io": 0, 00:22:33.880 "transports": [ 00:22:33.880 { 00:22:33.880 "trtype": "TCP" 00:22:33.880 } 00:22:33.880 ] 00:22:33.880 }, 00:22:33.880 { 00:22:33.880 "name": "nvmf_tgt_poll_group_003", 00:22:33.880 "admin_qpairs": 0, 00:22:33.880 "io_qpairs": 0, 00:22:33.880 "current_admin_qpairs": 0, 00:22:33.880 "current_io_qpairs": 0, 00:22:33.880 "pending_bdev_io": 0, 00:22:33.880 "completed_nvme_io": 0, 00:22:33.880 "transports": [ 00:22:33.880 { 00:22:33.880 "trtype": "TCP" 00:22:33.880 } 00:22:33.880 ] 00:22:33.880 } 00:22:33.880 ] 00:22:33.880 }' 00:22:33.880 01:22:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:22:33.880 01:22:55 nvmf_tcp.nvmf_perf_adq 
-- target/perf_adq.sh@100 -- # wc -l 00:22:33.880 01:22:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:22:33.880 01:22:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:22:33.880 01:22:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 955007 00:22:42.008 Initializing NVMe Controllers 00:22:42.008 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:42.008 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:42.008 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:42.008 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:42.008 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:42.008 Initialization complete. Launching workers. 00:22:42.008 ======================================================== 00:22:42.008 Latency(us) 00:22:42.008 Device Information : IOPS MiB/s Average min max 00:22:42.008 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5320.60 20.78 12036.84 2240.99 57881.36 00:22:42.008 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5847.00 22.84 10983.43 1656.59 57486.99 00:22:42.008 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 11410.40 44.57 5626.54 1768.65 47682.85 00:22:42.008 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5766.90 22.53 11102.26 2148.27 57930.97 00:22:42.008 ======================================================== 00:22:42.008 Total : 28344.90 110.72 9048.89 1656.59 57930.97 00:22:42.008 00:22:42.008 01:23:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:22:42.008 01:23:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:42.008 01:23:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:22:42.008 01:23:03 nvmf_tcp.nvmf_perf_adq 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:42.008 01:23:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:22:42.008 01:23:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:42.008 01:23:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:42.008 rmmod nvme_tcp 00:22:42.008 rmmod nvme_fabrics 00:22:42.008 rmmod nvme_keyring 00:22:42.008 01:23:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:42.008 01:23:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:22:42.008 01:23:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:22:42.008 01:23:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 954926 ']' 00:22:42.008 01:23:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 954926 00:22:42.008 01:23:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 954926 ']' 00:22:42.008 01:23:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 954926 00:22:42.008 01:23:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:22:42.008 01:23:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:42.008 01:23:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 954926 00:22:42.008 01:23:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:42.008 01:23:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:42.008 01:23:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 954926' 00:22:42.008 killing process with pid 954926 00:22:42.008 01:23:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 954926 00:22:42.008 01:23:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 954926 00:22:42.008 01:23:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' 
'' == iso ']' 00:22:42.008 01:23:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:42.008 01:23:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:42.008 01:23:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:42.008 01:23:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:42.008 01:23:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:42.008 01:23:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:42.008 01:23:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:45.345 01:23:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:45.345 01:23:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:22:45.345 00:22:45.345 real 0m50.268s 00:22:45.345 user 2m49.570s 00:22:45.345 sys 0m9.866s 00:22:45.345 01:23:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:45.345 01:23:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:45.345 ************************************ 00:22:45.345 END TEST nvmf_perf_adq 00:22:45.345 ************************************ 00:22:45.345 01:23:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:45.345 01:23:07 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:45.345 01:23:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:45.345 01:23:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:45.345 01:23:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:45.345 ************************************ 00:22:45.345 START TEST nvmf_shutdown 00:22:45.345 ************************************ 00:22:45.345 
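The idle-poll-group check in the perf_adq log above pipes `nvmf_get_stats` through jq and `wc -l` and asserts the count is not below 2. A minimal sketch of that count, using an inline stand-in for the stats JSON and `grep -c` in place of jq (the field names match the log; the values here are invented):

```shell
#!/usr/bin/env bash
# Count poll groups that carried no I/O qpairs, mimicking the
# perf_adq.sh@100 check. nvmf_stats below is a hand-written stand-in,
# not real rpc.py output.
nvmf_stats='{"poll_groups":[
  {"name":"nvmf_tgt_poll_group_000","current_io_qpairs":3},
  {"name":"nvmf_tgt_poll_group_001","current_io_qpairs":1},
  {"name":"nvmf_tgt_poll_group_002","current_io_qpairs":0},
  {"name":"nvmf_tgt_poll_group_003","current_io_qpairs":0}]}'
count=$(printf '%s\n' "$nvmf_stats" | grep -c '"current_io_qpairs":0')
echo "$count"
```

With ADQ steering all traffic into the configured classes, two of the four poll groups stay idle, so the `[[ $count -lt 2 ]]` guard in the log passes.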
01:23:07 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:45.345 * Looking for test storage... 00:22:45.345 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:45.345 01:23:07 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:45.345 01:23:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:22:45.345 01:23:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:45.345 01:23:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:45.345 01:23:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:45.345 01:23:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:45.345 01:23:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:45.345 01:23:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:45.345 01:23:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:45.345 01:23:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:45.345 01:23:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:45.345 01:23:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:45.345 01:23:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:45.345 01:23:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:45.345 01:23:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:45.345 01:23:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:45.345 
01:23:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:45.345 01:23:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:45.345 01:23:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:45.345 01:23:07 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:45.345 01:23:07 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:45.345 01:23:07 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:45.345 01:23:07 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.345 01:23:07 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.345 01:23:07 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.345 01:23:07 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:22:45.345 01:23:07 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.345 01:23:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:22:45.345 01:23:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:45.345 01:23:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:45.345 01:23:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:45.345 01:23:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:45.345 01:23:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:45.345 01:23:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:45.345 01:23:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:45.345 01:23:07 
nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:45.345 01:23:07 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:45.345 01:23:07 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:45.345 01:23:07 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:22:45.345 01:23:07 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:45.346 01:23:07 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:45.346 01:23:07 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:45.346 ************************************ 00:22:45.346 START TEST nvmf_shutdown_tc1 00:22:45.346 ************************************ 00:22:45.346 01:23:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:22:45.346 01:23:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:22:45.346 01:23:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:45.346 01:23:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:45.346 01:23:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:45.346 01:23:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:45.346 01:23:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:45.346 01:23:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:45.346 01:23:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:45.346 01:23:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:45.346 01:23:07 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:45.346 01:23:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:45.346 01:23:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:45.346 01:23:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:45.346 01:23:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:50.630 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:50.630 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:50.630 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:50.630 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:50.630 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:50.630 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:50.630 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:50.630 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:22:50.630 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:50.630 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:22:50.630 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:22:50.630 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:22:50.630 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:22:50.630 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@298 -- # mlx=() 00:22:50.630 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:50.630 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:50.630 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:50.630 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:50.630 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:50.630 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:50.630 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:50.630 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:50.630 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:50.630 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:50.630 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:50.630 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:50.630 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:50.630 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:50.630 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:50.630 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:50.630 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:50.630 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:50.630 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:50.630 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:50.630 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:50.630 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:50.630 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:50.630 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:50.630 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:50.630 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:50.630 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:50.630 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:50.630 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:50.630 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:50.630 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:50.630 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:50.630 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:50.630 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma 
]] 00:22:50.630 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:50.630 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:50.630 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:50.630 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:50.630 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:50.630 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:50.630 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:50.630 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:50.630 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:50.630 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:50.630 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:50.630 Found net devices under 0000:86:00.0: cvl_0_0 00:22:50.630 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:50.630 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:50.630 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:50.630 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:50.630 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:50.630 01:23:12 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:50.630 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:50.630 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:50.630 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:50.630 Found net devices under 0000:86:00.1: cvl_0_1 00:22:50.630 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:50.630 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:50.630 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:22:50.630 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:50.630 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:50.630 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:50.630 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:50.630 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:50.630 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:50.630 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:50.631 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:50.631 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:50.631 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:50.631 
01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:50.631 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:50.631 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:50.631 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:50.631 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:50.631 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:50.631 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:50.631 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:50.631 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:50.631 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:50.631 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:50.631 01:23:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:50.631 01:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:50.631 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:50.631 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:22:50.631 00:22:50.631 --- 10.0.0.2 ping statistics --- 00:22:50.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:50.631 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:22:50.631 01:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:50.631 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:50.631 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.383 ms 00:22:50.631 00:22:50.631 --- 10.0.0.1 ping statistics --- 00:22:50.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:50.631 rtt min/avg/max/mdev = 0.383/0.383/0.383/0.000 ms 00:22:50.631 01:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:50.631 01:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:22:50.631 01:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:50.631 01:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:50.631 01:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:50.631 01:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:50.631 01:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:50.631 01:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:50.631 01:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:50.631 01:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:50.631 01:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 
00:22:50.631 01:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:50.631 01:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:50.631 01:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=960426 00:22:50.631 01:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 960426 00:22:50.631 01:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:50.631 01:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 960426 ']' 00:22:50.631 01:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:50.631 01:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:50.631 01:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:50.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:50.631 01:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:50.631 01:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:50.891 [2024-07-25 01:23:13.128555] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:22:50.891 [2024-07-25 01:23:13.128598] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:50.891 EAL: No free 2048 kB hugepages reported on node 1 00:22:50.891 [2024-07-25 01:23:13.184955] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:50.891 [2024-07-25 01:23:13.265255] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:50.891 [2024-07-25 01:23:13.265291] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:50.891 [2024-07-25 01:23:13.265299] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:50.891 [2024-07-25 01:23:13.265305] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:50.891 [2024-07-25 01:23:13.265310] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:50.891 [2024-07-25 01:23:13.265405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:50.891 [2024-07-25 01:23:13.265487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:50.891 [2024-07-25 01:23:13.265592] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:50.891 [2024-07-25 01:23:13.265593] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:22:51.459 01:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:51.459 01:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:22:51.459 01:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:51.459 01:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:51.459 01:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:51.719 01:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:51.719 01:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:51.719 01:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.719 01:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:51.719 [2024-07-25 01:23:13.990144] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:51.719 01:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.719 01:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:22:51.719 01:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:51.719 
01:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:51.719 01:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:51.719 01:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:51.719 01:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:51.719 01:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:51.719 01:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:51.719 01:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:51.719 01:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:51.719 01:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:51.719 01:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:51.719 01:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:51.719 01:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:51.719 01:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:51.719 01:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:51.719 01:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:51.719 01:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:51.719 01:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:51.719 01:23:14 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:51.719 01:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:51.719 01:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:51.719 01:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:51.719 01:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:51.719 01:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:51.719 01:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:22:51.719 01:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.719 01:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:51.719 Malloc1 00:22:51.719 [2024-07-25 01:23:14.086051] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:51.719 Malloc2 00:22:51.719 Malloc3 00:22:51.719 Malloc4 00:22:51.981 Malloc5 00:22:51.981 Malloc6 00:22:51.981 Malloc7 00:22:51.981 Malloc8 00:22:51.982 Malloc9 00:22:51.982 Malloc10 00:22:52.243 01:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.243 01:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:52.243 01:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:52.243 01:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:52.243 01:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=960706 00:22:52.243 01:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 960706 
/var/tmp/bdevperf.sock 00:22:52.243 01:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 960706 ']' 00:22:52.243 01:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:52.243 01:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:22:52.243 01:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:52.243 01:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:52.243 01:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:52.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:52.243 01:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:22:52.243 01:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:52.243 01:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:22:52.243 01:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:52.243 01:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:52.243 01:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:52.243 { 00:22:52.243 "params": { 00:22:52.243 "name": "Nvme$subsystem", 00:22:52.243 "trtype": "$TEST_TRANSPORT", 00:22:52.243 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:52.243 "adrfam": "ipv4", 00:22:52.243 "trsvcid": "$NVMF_PORT", 00:22:52.243 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:52.243 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:52.243 "hdgst": ${hdgst:-false}, 00:22:52.243 "ddgst": ${ddgst:-false} 00:22:52.243 }, 00:22:52.243 "method": "bdev_nvme_attach_controller" 00:22:52.243 } 00:22:52.243 EOF 00:22:52.243 )") 00:22:52.243 01:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:52.243 01:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:52.243 01:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:52.243 { 00:22:52.243 "params": { 00:22:52.243 "name": "Nvme$subsystem", 00:22:52.243 "trtype": "$TEST_TRANSPORT", 00:22:52.243 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:52.243 "adrfam": "ipv4", 00:22:52.243 "trsvcid": "$NVMF_PORT", 00:22:52.243 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:52.243 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:52.243 "hdgst": ${hdgst:-false}, 00:22:52.243 "ddgst": ${ddgst:-false} 00:22:52.243 
}, 00:22:52.243 "method": "bdev_nvme_attach_controller" 00:22:52.243 } 00:22:52.243 EOF 00:22:52.243 )") 00:22:52.243 01:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:52.243 01:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:52.243 01:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:52.243 { 00:22:52.243 "params": { 00:22:52.243 "name": "Nvme$subsystem", 00:22:52.243 "trtype": "$TEST_TRANSPORT", 00:22:52.243 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:52.243 "adrfam": "ipv4", 00:22:52.243 "trsvcid": "$NVMF_PORT", 00:22:52.243 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:52.243 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:52.243 "hdgst": ${hdgst:-false}, 00:22:52.243 "ddgst": ${ddgst:-false} 00:22:52.243 }, 00:22:52.243 "method": "bdev_nvme_attach_controller" 00:22:52.243 } 00:22:52.243 EOF 00:22:52.243 )") 00:22:52.243 01:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:52.243 01:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:52.243 01:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:52.243 { 00:22:52.243 "params": { 00:22:52.243 "name": "Nvme$subsystem", 00:22:52.243 "trtype": "$TEST_TRANSPORT", 00:22:52.243 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:52.243 "adrfam": "ipv4", 00:22:52.243 "trsvcid": "$NVMF_PORT", 00:22:52.243 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:52.243 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:52.243 "hdgst": ${hdgst:-false}, 00:22:52.243 "ddgst": ${ddgst:-false} 00:22:52.243 }, 00:22:52.243 "method": "bdev_nvme_attach_controller" 00:22:52.243 } 00:22:52.243 EOF 00:22:52.243 )") 00:22:52.243 01:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:52.243 01:23:14 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:52.243 01:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:52.243 { 00:22:52.243 "params": { 00:22:52.243 "name": "Nvme$subsystem", 00:22:52.243 "trtype": "$TEST_TRANSPORT", 00:22:52.243 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:52.243 "adrfam": "ipv4", 00:22:52.243 "trsvcid": "$NVMF_PORT", 00:22:52.243 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:52.243 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:52.243 "hdgst": ${hdgst:-false}, 00:22:52.243 "ddgst": ${ddgst:-false} 00:22:52.243 }, 00:22:52.243 "method": "bdev_nvme_attach_controller" 00:22:52.243 } 00:22:52.243 EOF 00:22:52.243 )") 00:22:52.243 01:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:52.243 01:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:52.243 01:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:52.243 { 00:22:52.243 "params": { 00:22:52.243 "name": "Nvme$subsystem", 00:22:52.243 "trtype": "$TEST_TRANSPORT", 00:22:52.243 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:52.243 "adrfam": "ipv4", 00:22:52.243 "trsvcid": "$NVMF_PORT", 00:22:52.243 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:52.243 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:52.243 "hdgst": ${hdgst:-false}, 00:22:52.243 "ddgst": ${ddgst:-false} 00:22:52.243 }, 00:22:52.243 "method": "bdev_nvme_attach_controller" 00:22:52.243 } 00:22:52.243 EOF 00:22:52.243 )") 00:22:52.243 01:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:52.244 01:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:52.244 01:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:52.244 { 00:22:52.244 
"params": { 00:22:52.244 "name": "Nvme$subsystem", 00:22:52.244 "trtype": "$TEST_TRANSPORT", 00:22:52.244 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:52.244 "adrfam": "ipv4", 00:22:52.244 "trsvcid": "$NVMF_PORT", 00:22:52.244 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:52.244 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:52.244 "hdgst": ${hdgst:-false}, 00:22:52.244 "ddgst": ${ddgst:-false} 00:22:52.244 }, 00:22:52.244 "method": "bdev_nvme_attach_controller" 00:22:52.244 } 00:22:52.244 EOF 00:22:52.244 )") 00:22:52.244 01:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:52.244 [2024-07-25 01:23:14.568180] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:22:52.244 [2024-07-25 01:23:14.568230] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:52.244 01:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:52.244 01:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:52.244 { 00:22:52.244 "params": { 00:22:52.244 "name": "Nvme$subsystem", 00:22:52.244 "trtype": "$TEST_TRANSPORT", 00:22:52.244 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:52.244 "adrfam": "ipv4", 00:22:52.244 "trsvcid": "$NVMF_PORT", 00:22:52.244 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:52.244 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:52.244 "hdgst": ${hdgst:-false}, 00:22:52.244 "ddgst": ${ddgst:-false} 00:22:52.244 }, 00:22:52.244 "method": "bdev_nvme_attach_controller" 00:22:52.244 } 00:22:52.244 EOF 00:22:52.244 )") 00:22:52.244 01:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:52.244 01:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:22:52.244 01:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:52.244 { 00:22:52.244 "params": { 00:22:52.244 "name": "Nvme$subsystem", 00:22:52.244 "trtype": "$TEST_TRANSPORT", 00:22:52.244 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:52.244 "adrfam": "ipv4", 00:22:52.244 "trsvcid": "$NVMF_PORT", 00:22:52.244 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:52.244 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:52.244 "hdgst": ${hdgst:-false}, 00:22:52.244 "ddgst": ${ddgst:-false} 00:22:52.244 }, 00:22:52.244 "method": "bdev_nvme_attach_controller" 00:22:52.244 } 00:22:52.244 EOF 00:22:52.244 )") 00:22:52.244 01:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:52.244 01:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:52.244 01:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:52.244 { 00:22:52.244 "params": { 00:22:52.244 "name": "Nvme$subsystem", 00:22:52.244 "trtype": "$TEST_TRANSPORT", 00:22:52.244 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:52.244 "adrfam": "ipv4", 00:22:52.244 "trsvcid": "$NVMF_PORT", 00:22:52.244 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:52.244 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:52.244 "hdgst": ${hdgst:-false}, 00:22:52.244 "ddgst": ${ddgst:-false} 00:22:52.244 }, 00:22:52.244 "method": "bdev_nvme_attach_controller" 00:22:52.244 } 00:22:52.244 EOF 00:22:52.244 )") 00:22:52.244 01:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:52.244 01:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:22:52.244 EAL: No free 2048 kB hugepages reported on node 1 00:22:52.244 01:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:22:52.244 01:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:52.244 "params": { 00:22:52.244 "name": "Nvme1", 00:22:52.244 "trtype": "tcp", 00:22:52.244 "traddr": "10.0.0.2", 00:22:52.244 "adrfam": "ipv4", 00:22:52.244 "trsvcid": "4420", 00:22:52.244 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:52.244 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:52.244 "hdgst": false, 00:22:52.244 "ddgst": false 00:22:52.244 }, 00:22:52.244 "method": "bdev_nvme_attach_controller" 00:22:52.244 },{ 00:22:52.244 "params": { 00:22:52.244 "name": "Nvme2", 00:22:52.244 "trtype": "tcp", 00:22:52.244 "traddr": "10.0.0.2", 00:22:52.244 "adrfam": "ipv4", 00:22:52.244 "trsvcid": "4420", 00:22:52.244 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:52.244 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:52.244 "hdgst": false, 00:22:52.244 "ddgst": false 00:22:52.244 }, 00:22:52.244 "method": "bdev_nvme_attach_controller" 00:22:52.244 },{ 00:22:52.244 "params": { 00:22:52.244 "name": "Nvme3", 00:22:52.244 "trtype": "tcp", 00:22:52.244 "traddr": "10.0.0.2", 00:22:52.244 "adrfam": "ipv4", 00:22:52.244 "trsvcid": "4420", 00:22:52.244 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:52.244 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:52.244 "hdgst": false, 00:22:52.244 "ddgst": false 00:22:52.244 }, 00:22:52.244 "method": "bdev_nvme_attach_controller" 00:22:52.244 },{ 00:22:52.244 "params": { 00:22:52.244 "name": "Nvme4", 00:22:52.244 "trtype": "tcp", 00:22:52.244 "traddr": "10.0.0.2", 00:22:52.244 "adrfam": "ipv4", 00:22:52.244 "trsvcid": "4420", 00:22:52.244 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:52.244 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:52.244 "hdgst": false, 00:22:52.244 "ddgst": false 00:22:52.244 }, 00:22:52.244 "method": "bdev_nvme_attach_controller" 00:22:52.244 },{ 
00:22:52.244 "params": { 00:22:52.244 "name": "Nvme5", 00:22:52.244 "trtype": "tcp", 00:22:52.244 "traddr": "10.0.0.2", 00:22:52.244 "adrfam": "ipv4", 00:22:52.244 "trsvcid": "4420", 00:22:52.244 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:52.244 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:52.244 "hdgst": false, 00:22:52.244 "ddgst": false 00:22:52.244 }, 00:22:52.244 "method": "bdev_nvme_attach_controller" 00:22:52.244 },{ 00:22:52.244 "params": { 00:22:52.244 "name": "Nvme6", 00:22:52.244 "trtype": "tcp", 00:22:52.244 "traddr": "10.0.0.2", 00:22:52.244 "adrfam": "ipv4", 00:22:52.244 "trsvcid": "4420", 00:22:52.244 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:52.244 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:52.244 "hdgst": false, 00:22:52.244 "ddgst": false 00:22:52.244 }, 00:22:52.244 "method": "bdev_nvme_attach_controller" 00:22:52.244 },{ 00:22:52.244 "params": { 00:22:52.244 "name": "Nvme7", 00:22:52.244 "trtype": "tcp", 00:22:52.244 "traddr": "10.0.0.2", 00:22:52.244 "adrfam": "ipv4", 00:22:52.244 "trsvcid": "4420", 00:22:52.244 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:52.244 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:52.244 "hdgst": false, 00:22:52.244 "ddgst": false 00:22:52.244 }, 00:22:52.244 "method": "bdev_nvme_attach_controller" 00:22:52.244 },{ 00:22:52.244 "params": { 00:22:52.244 "name": "Nvme8", 00:22:52.244 "trtype": "tcp", 00:22:52.244 "traddr": "10.0.0.2", 00:22:52.244 "adrfam": "ipv4", 00:22:52.244 "trsvcid": "4420", 00:22:52.244 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:52.244 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:52.244 "hdgst": false, 00:22:52.244 "ddgst": false 00:22:52.244 }, 00:22:52.244 "method": "bdev_nvme_attach_controller" 00:22:52.244 },{ 00:22:52.244 "params": { 00:22:52.244 "name": "Nvme9", 00:22:52.244 "trtype": "tcp", 00:22:52.244 "traddr": "10.0.0.2", 00:22:52.244 "adrfam": "ipv4", 00:22:52.244 "trsvcid": "4420", 00:22:52.244 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:52.244 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:22:52.244 "hdgst": false, 00:22:52.244 "ddgst": false 00:22:52.244 }, 00:22:52.244 "method": "bdev_nvme_attach_controller" 00:22:52.244 },{ 00:22:52.244 "params": { 00:22:52.244 "name": "Nvme10", 00:22:52.244 "trtype": "tcp", 00:22:52.244 "traddr": "10.0.0.2", 00:22:52.244 "adrfam": "ipv4", 00:22:52.244 "trsvcid": "4420", 00:22:52.244 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:52.244 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:52.244 "hdgst": false, 00:22:52.244 "ddgst": false 00:22:52.244 }, 00:22:52.244 "method": "bdev_nvme_attach_controller" 00:22:52.244 }' 00:22:52.244 [2024-07-25 01:23:14.625520] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:52.244 [2024-07-25 01:23:14.700031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:54.154 01:23:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:54.154 01:23:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:22:54.154 01:23:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:54.154 01:23:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.154 01:23:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:54.154 01:23:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.154 01:23:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 960706 00:22:54.154 01:23:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:22:54.154 01:23:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:22:54.724 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 960706 Killed 
$rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:22:54.724 01:23:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 960426 00:22:54.724 01:23:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:54.724 01:23:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:54.724 01:23:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:22:54.724 01:23:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:22:54.724 01:23:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:54.724 01:23:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:54.724 { 00:22:54.724 "params": { 00:22:54.724 "name": "Nvme$subsystem", 00:22:54.724 "trtype": "$TEST_TRANSPORT", 00:22:54.724 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:54.724 "adrfam": "ipv4", 00:22:54.724 "trsvcid": "$NVMF_PORT", 00:22:54.724 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:54.724 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:54.724 "hdgst": ${hdgst:-false}, 00:22:54.724 "ddgst": ${ddgst:-false} 00:22:54.724 }, 00:22:54.724 "method": "bdev_nvme_attach_controller" 00:22:54.724 } 00:22:54.724 EOF 00:22:54.724 )") 00:22:54.724 01:23:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:54.724 01:23:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:54.724 01:23:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:54.724 { 00:22:54.724 "params": { 00:22:54.724 "name": "Nvme$subsystem", 
00:22:54.724 "trtype": "$TEST_TRANSPORT", 00:22:54.724 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:54.724 "adrfam": "ipv4", 00:22:54.724 "trsvcid": "$NVMF_PORT", 00:22:54.724 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:54.724 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:54.724 "hdgst": ${hdgst:-false}, 00:22:54.724 "ddgst": ${ddgst:-false} 00:22:54.724 }, 00:22:54.724 "method": "bdev_nvme_attach_controller" 00:22:54.724 } 00:22:54.724 EOF 00:22:54.724 )") 00:22:54.724 01:23:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:54.724 01:23:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:54.724 01:23:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:54.724 { 00:22:54.724 "params": { 00:22:54.724 "name": "Nvme$subsystem", 00:22:54.724 "trtype": "$TEST_TRANSPORT", 00:22:54.724 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:54.724 "adrfam": "ipv4", 00:22:54.724 "trsvcid": "$NVMF_PORT", 00:22:54.724 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:54.724 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:54.724 "hdgst": ${hdgst:-false}, 00:22:54.724 "ddgst": ${ddgst:-false} 00:22:54.724 }, 00:22:54.724 "method": "bdev_nvme_attach_controller" 00:22:54.724 } 00:22:54.724 EOF 00:22:54.724 )") 00:22:54.724 01:23:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:54.724 01:23:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:54.724 01:23:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:54.724 { 00:22:54.724 "params": { 00:22:54.724 "name": "Nvme$subsystem", 00:22:54.724 "trtype": "$TEST_TRANSPORT", 00:22:54.724 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:54.724 "adrfam": "ipv4", 00:22:54.724 "trsvcid": "$NVMF_PORT", 00:22:54.724 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:54.724 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:54.724 "hdgst": ${hdgst:-false}, 00:22:54.724 "ddgst": ${ddgst:-false} 00:22:54.724 }, 00:22:54.724 "method": "bdev_nvme_attach_controller" 00:22:54.724 } 00:22:54.724 EOF 00:22:54.724 )") 00:22:54.724 01:23:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:54.724 01:23:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:54.724 01:23:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:54.724 { 00:22:54.724 "params": { 00:22:54.724 "name": "Nvme$subsystem", 00:22:54.724 "trtype": "$TEST_TRANSPORT", 00:22:54.724 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:54.724 "adrfam": "ipv4", 00:22:54.724 "trsvcid": "$NVMF_PORT", 00:22:54.724 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:54.724 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:54.724 "hdgst": ${hdgst:-false}, 00:22:54.724 "ddgst": ${ddgst:-false} 00:22:54.724 }, 00:22:54.724 "method": "bdev_nvme_attach_controller" 00:22:54.724 } 00:22:54.724 EOF 00:22:54.724 )") 00:22:54.724 01:23:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:54.985 01:23:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:54.985 01:23:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:54.985 { 00:22:54.985 "params": { 00:22:54.985 "name": "Nvme$subsystem", 00:22:54.985 "trtype": "$TEST_TRANSPORT", 00:22:54.985 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:54.985 "adrfam": "ipv4", 00:22:54.985 "trsvcid": "$NVMF_PORT", 00:22:54.985 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:54.985 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:54.985 "hdgst": ${hdgst:-false}, 00:22:54.985 "ddgst": ${ddgst:-false} 00:22:54.985 }, 00:22:54.985 "method": "bdev_nvme_attach_controller" 00:22:54.985 } 00:22:54.985 EOF 
00:22:54.985 )") 00:22:54.985 01:23:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:54.985 01:23:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:54.985 01:23:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:54.985 { 00:22:54.985 "params": { 00:22:54.985 "name": "Nvme$subsystem", 00:22:54.985 "trtype": "$TEST_TRANSPORT", 00:22:54.985 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:54.985 "adrfam": "ipv4", 00:22:54.985 "trsvcid": "$NVMF_PORT", 00:22:54.985 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:54.985 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:54.985 "hdgst": ${hdgst:-false}, 00:22:54.985 "ddgst": ${ddgst:-false} 00:22:54.985 }, 00:22:54.985 "method": "bdev_nvme_attach_controller" 00:22:54.985 } 00:22:54.985 EOF 00:22:54.985 )") 00:22:54.985 01:23:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:54.985 [2024-07-25 01:23:17.231519] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:22:54.986 [2024-07-25 01:23:17.231568] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid961189 ] 00:22:54.986 01:23:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:54.986 01:23:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:54.986 { 00:22:54.986 "params": { 00:22:54.986 "name": "Nvme$subsystem", 00:22:54.986 "trtype": "$TEST_TRANSPORT", 00:22:54.986 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:54.986 "adrfam": "ipv4", 00:22:54.986 "trsvcid": "$NVMF_PORT", 00:22:54.986 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:54.986 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:54.986 "hdgst": ${hdgst:-false}, 00:22:54.986 "ddgst": ${ddgst:-false} 00:22:54.986 }, 00:22:54.986 "method": "bdev_nvme_attach_controller" 00:22:54.986 } 00:22:54.986 EOF 00:22:54.986 )") 00:22:54.986 01:23:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:54.986 01:23:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:54.986 01:23:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:54.986 { 00:22:54.986 "params": { 00:22:54.986 "name": "Nvme$subsystem", 00:22:54.986 "trtype": "$TEST_TRANSPORT", 00:22:54.986 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:54.986 "adrfam": "ipv4", 00:22:54.986 "trsvcid": "$NVMF_PORT", 00:22:54.986 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:54.986 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:54.986 "hdgst": ${hdgst:-false}, 00:22:54.986 "ddgst": ${ddgst:-false} 00:22:54.986 }, 00:22:54.986 "method": "bdev_nvme_attach_controller" 00:22:54.986 } 00:22:54.986 EOF 00:22:54.986 )") 00:22:54.986 01:23:17 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:54.986 01:23:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:54.986 01:23:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:54.986 { 00:22:54.986 "params": { 00:22:54.986 "name": "Nvme$subsystem", 00:22:54.986 "trtype": "$TEST_TRANSPORT", 00:22:54.986 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:54.986 "adrfam": "ipv4", 00:22:54.986 "trsvcid": "$NVMF_PORT", 00:22:54.986 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:54.986 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:54.986 "hdgst": ${hdgst:-false}, 00:22:54.986 "ddgst": ${ddgst:-false} 00:22:54.986 }, 00:22:54.986 "method": "bdev_nvme_attach_controller" 00:22:54.986 } 00:22:54.986 EOF 00:22:54.986 )") 00:22:54.986 01:23:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:54.986 01:23:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:22:54.986 01:23:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:22:54.986 01:23:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:54.986 "params": { 00:22:54.986 "name": "Nvme1", 00:22:54.986 "trtype": "tcp", 00:22:54.986 "traddr": "10.0.0.2", 00:22:54.986 "adrfam": "ipv4", 00:22:54.986 "trsvcid": "4420", 00:22:54.986 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:54.986 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:54.986 "hdgst": false, 00:22:54.986 "ddgst": false 00:22:54.986 }, 00:22:54.986 "method": "bdev_nvme_attach_controller" 00:22:54.986 },{ 00:22:54.986 "params": { 00:22:54.986 "name": "Nvme2", 00:22:54.986 "trtype": "tcp", 00:22:54.986 "traddr": "10.0.0.2", 00:22:54.986 "adrfam": "ipv4", 00:22:54.986 "trsvcid": "4420", 00:22:54.986 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:54.986 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:54.986 "hdgst": false, 00:22:54.986 "ddgst": false 00:22:54.986 }, 00:22:54.986 "method": "bdev_nvme_attach_controller" 00:22:54.986 },{ 00:22:54.986 "params": { 00:22:54.986 "name": "Nvme3", 00:22:54.986 "trtype": "tcp", 00:22:54.986 "traddr": "10.0.0.2", 00:22:54.986 "adrfam": "ipv4", 00:22:54.986 "trsvcid": "4420", 00:22:54.986 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:54.986 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:54.986 "hdgst": false, 00:22:54.986 "ddgst": false 00:22:54.986 }, 00:22:54.986 "method": "bdev_nvme_attach_controller" 00:22:54.986 },{ 00:22:54.986 "params": { 00:22:54.986 "name": "Nvme4", 00:22:54.986 "trtype": "tcp", 00:22:54.986 "traddr": "10.0.0.2", 00:22:54.986 "adrfam": "ipv4", 00:22:54.986 "trsvcid": "4420", 00:22:54.986 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:54.986 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:54.986 "hdgst": false, 00:22:54.986 "ddgst": false 00:22:54.986 }, 00:22:54.986 "method": "bdev_nvme_attach_controller" 00:22:54.986 },{ 00:22:54.986 "params": { 00:22:54.986 "name": "Nvme5", 00:22:54.986 
"trtype": "tcp", 00:22:54.986 "traddr": "10.0.0.2", 00:22:54.986 "adrfam": "ipv4", 00:22:54.986 "trsvcid": "4420", 00:22:54.986 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:54.986 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:54.986 "hdgst": false, 00:22:54.986 "ddgst": false 00:22:54.986 }, 00:22:54.986 "method": "bdev_nvme_attach_controller" 00:22:54.986 },{ 00:22:54.986 "params": { 00:22:54.986 "name": "Nvme6", 00:22:54.986 "trtype": "tcp", 00:22:54.986 "traddr": "10.0.0.2", 00:22:54.986 "adrfam": "ipv4", 00:22:54.986 "trsvcid": "4420", 00:22:54.986 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:54.986 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:54.986 "hdgst": false, 00:22:54.986 "ddgst": false 00:22:54.986 }, 00:22:54.986 "method": "bdev_nvme_attach_controller" 00:22:54.986 },{ 00:22:54.986 "params": { 00:22:54.986 "name": "Nvme7", 00:22:54.986 "trtype": "tcp", 00:22:54.986 "traddr": "10.0.0.2", 00:22:54.986 "adrfam": "ipv4", 00:22:54.986 "trsvcid": "4420", 00:22:54.986 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:54.986 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:54.986 "hdgst": false, 00:22:54.986 "ddgst": false 00:22:54.986 }, 00:22:54.986 "method": "bdev_nvme_attach_controller" 00:22:54.986 },{ 00:22:54.986 "params": { 00:22:54.986 "name": "Nvme8", 00:22:54.986 "trtype": "tcp", 00:22:54.986 "traddr": "10.0.0.2", 00:22:54.986 "adrfam": "ipv4", 00:22:54.986 "trsvcid": "4420", 00:22:54.986 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:54.986 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:54.986 "hdgst": false, 00:22:54.986 "ddgst": false 00:22:54.986 }, 00:22:54.986 "method": "bdev_nvme_attach_controller" 00:22:54.986 },{ 00:22:54.986 "params": { 00:22:54.986 "name": "Nvme9", 00:22:54.986 "trtype": "tcp", 00:22:54.986 "traddr": "10.0.0.2", 00:22:54.986 "adrfam": "ipv4", 00:22:54.986 "trsvcid": "4420", 00:22:54.986 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:54.986 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:54.986 "hdgst": false, 00:22:54.986 "ddgst": 
false 00:22:54.986 }, 00:22:54.986 "method": "bdev_nvme_attach_controller" 00:22:54.986 },{ 00:22:54.986 "params": { 00:22:54.986 "name": "Nvme10", 00:22:54.986 "trtype": "tcp", 00:22:54.986 "traddr": "10.0.0.2", 00:22:54.986 "adrfam": "ipv4", 00:22:54.986 "trsvcid": "4420", 00:22:54.986 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:54.986 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:54.986 "hdgst": false, 00:22:54.986 "ddgst": false 00:22:54.986 }, 00:22:54.986 "method": "bdev_nvme_attach_controller" 00:22:54.986 }' 00:22:54.986 EAL: No free 2048 kB hugepages reported on node 1 00:22:54.986 [2024-07-25 01:23:17.288302] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.986 [2024-07-25 01:23:17.362707] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:56.368 Running I/O for 1 seconds... 00:22:57.317 [2024-07-25 01:23:19.692595] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2624190 is same with the state(5) to be set 00:22:57.317 00:22:57.317 Latency(us) 00:22:57.318 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:57.318 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:57.318 Verification LBA range: start 0x0 length 0x400 00:22:57.318 Nvme1n1 : 1.05 243.66 15.23 0.00 0.00 260195.95 21883.33 226127.69 00:22:57.318 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:57.318 Verification LBA range: start 0x0 length 0x400 00:22:57.318 Nvme2n1 : 1.15 222.60 13.91 0.00 0.00 281189.29 24048.86 288130.45 00:22:57.318 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:57.318 Verification LBA range: start 0x0 length 0x400 00:22:57.318 Nvme3n1 : 1.14 223.89 13.99 0.00 0.00 275421.50 22567.18 288130.45 00:22:57.318 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:57.318 Verification LBA range: start 0x0 length 0x400 00:22:57.318 Nvme4n1 : 1.13 283.11 17.69 
0.00 0.00 214518.21 21085.50 223392.28 00:22:57.318 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:57.318 Verification LBA range: start 0x0 length 0x400 00:22:57.318 Nvme5n1 : 1.11 233.87 14.62 0.00 0.00 254820.52 5556.31 233422.14 00:22:57.318 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:57.318 Verification LBA range: start 0x0 length 0x400 00:22:57.318 Nvme6n1 : 1.13 286.38 17.90 0.00 0.00 204973.60 5328.36 216097.84 00:22:57.318 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:57.318 Verification LBA range: start 0x0 length 0x400 00:22:57.318 Nvme7n1 : 1.11 288.51 18.03 0.00 0.00 200490.83 19717.79 215186.03 00:22:57.318 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:57.318 Verification LBA range: start 0x0 length 0x400 00:22:57.318 Nvme8n1 : 1.13 284.17 17.76 0.00 0.00 200795.89 21085.50 217921.45 00:22:57.318 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:57.318 Verification LBA range: start 0x0 length 0x400 00:22:57.318 Nvme9n1 : 1.16 276.62 17.29 0.00 0.00 203814.11 15956.59 215186.03 00:22:57.318 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:57.318 Verification LBA range: start 0x0 length 0x400 00:22:57.318 Nvme10n1 : 1.20 267.44 16.72 0.00 0.00 201116.09 20629.59 205156.17 00:22:57.318 =================================================================================================================== 00:22:57.318 Total : 2610.26 163.14 0.00 0.00 226423.63 5328.36 288130.45 00:22:57.577 01:23:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:22:57.577 01:23:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:22:57.577 01:23:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:57.577 01:23:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:57.577 01:23:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:22:57.577 01:23:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:57.577 01:23:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:22:57.577 01:23:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:57.577 01:23:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:22:57.577 01:23:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:57.577 01:23:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:57.577 rmmod nvme_tcp 00:22:57.577 rmmod nvme_fabrics 00:22:57.577 rmmod nvme_keyring 00:22:57.577 01:23:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:57.836 01:23:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:22:57.836 01:23:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:22:57.836 01:23:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 960426 ']' 00:22:57.836 01:23:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 960426 00:22:57.836 01:23:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 960426 ']' 00:22:57.836 01:23:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 960426 00:22:57.836 01:23:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:22:57.836 01:23:20 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:57.836 01:23:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 960426 00:22:57.836 01:23:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:57.836 01:23:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:57.836 01:23:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 960426' 00:22:57.836 killing process with pid 960426 00:22:57.836 01:23:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 960426 00:22:57.836 01:23:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 960426 00:22:58.096 01:23:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:58.096 01:23:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:58.096 01:23:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:58.096 01:23:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:58.096 01:23:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:58.096 01:23:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:58.096 01:23:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:58.096 01:23:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:00.638 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:00.638 00:23:00.638 real 0m14.959s 
00:23:00.638 user 0m34.180s 00:23:00.638 sys 0m5.478s 00:23:00.638 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:00.638 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:00.639 ************************************ 00:23:00.639 END TEST nvmf_shutdown_tc1 00:23:00.639 ************************************ 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:00.639 ************************************ 00:23:00.639 START TEST nvmf_shutdown_tc2 00:23:00.639 ************************************ 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:00.639 01:23:22 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 
00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:00.639 
01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:00.639 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:00.639 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:00.639 Found net devices under 0000:86:00.0: cvl_0_0 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:00.639 Found net devices under 0000:86:00.1: cvl_0_1 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:00.639 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:00.640 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:00.640 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:00.640 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:00.640 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:00.640 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:00.640 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:00.640 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:00.640 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:00.640 
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:00.640 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:23:00.640 00:23:00.640 --- 10.0.0.2 ping statistics --- 00:23:00.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:00.640 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:23:00.640 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:00.640 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:00.640 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.360 ms 00:23:00.640 00:23:00.640 --- 10.0.0.1 ping statistics --- 00:23:00.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:00.640 rtt min/avg/max/mdev = 0.360/0.360/0.360/0.000 ms 00:23:00.640 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:00.640 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:23:00.640 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:00.640 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:00.640 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:00.640 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:00.640 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:00.640 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:00.640 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:00.640 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:00.640 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:00.640 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:00.640 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:00.640 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:00.640 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=962207 00:23:00.640 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 962207 00:23:00.640 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 962207 ']' 00:23:00.640 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:00.640 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:00.640 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:00.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:00.640 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:00.640 01:23:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:00.640 [2024-07-25 01:23:22.968875] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:23:00.640 [2024-07-25 01:23:22.968917] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:00.640 EAL: No free 2048 kB hugepages reported on node 1 00:23:00.640 [2024-07-25 01:23:23.023689] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:00.640 [2024-07-25 01:23:23.103844] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:00.640 [2024-07-25 01:23:23.103881] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:00.640 [2024-07-25 01:23:23.103888] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:00.640 [2024-07-25 01:23:23.103895] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:00.640 [2024-07-25 01:23:23.103900] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
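The `nvmf_tcp_init` sequence earlier in the trace (common.sh@248 onward) builds a two-port loopback without any remote machine: one physical port (`cvl_0_0`) is moved into a fresh network namespace, each side gets a 10.0.0.x/24 address, and a ping in both directions confirms the path before `nvmf_tgt` is launched inside that namespace. A dry-run sketch of the same sequence that prints the commands instead of executing them (the real commands need root and the `cvl_0_*` interfaces from this testbed):

```shell
#!/usr/bin/env bash
# Dry-run of the netns plumbing: echo each command rather than run it.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                 # target port into the ns
run ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side stays in root ns
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                              # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1          # target -> initiator
```

This is why the target app is started under `ip netns exec cvl_0_0_ns_spdk ...` and listens on 10.0.0.2:4420 while bdevperf connects from the root namespace.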
00:23:00.640 [2024-07-25 01:23:23.103947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:00.640 [2024-07-25 01:23:23.104033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:00.640 [2024-07-25 01:23:23.104142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:00.640 [2024-07-25 01:23:23.104142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:01.580 01:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:01.580 01:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:23:01.580 01:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:01.580 01:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:01.580 01:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:01.580 01:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:01.580 01:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:01.580 01:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.580 01:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:01.580 [2024-07-25 01:23:23.832132] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:01.580 01:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.580 01:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:01.580 01:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:01.580 
01:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:01.580 01:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:01.580 01:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:01.580 01:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:01.580 01:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:01.580 01:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:01.580 01:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:01.580 01:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:01.580 01:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:01.580 01:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:01.580 01:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:01.580 01:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:01.580 01:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:01.580 01:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:01.580 01:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:01.580 01:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:01.580 01:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:01.580 01:23:23 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:01.580 01:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:01.580 01:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:01.580 01:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:01.580 01:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:01.580 01:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:01.580 01:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:01.580 01:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.580 01:23:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:01.580 Malloc1 00:23:01.580 [2024-07-25 01:23:23.927962] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:01.580 Malloc2 00:23:01.580 Malloc3 00:23:01.580 Malloc4 00:23:01.840 Malloc5 00:23:01.840 Malloc6 00:23:01.840 Malloc7 00:23:01.840 Malloc8 00:23:01.840 Malloc9 00:23:01.840 Malloc10 00:23:01.840 01:23:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.840 01:23:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:01.840 01:23:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:01.840 01:23:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:02.102 01:23:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=962492 00:23:02.102 01:23:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 962492 
/var/tmp/bdevperf.sock 00:23:02.102 01:23:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 962492 ']' 00:23:02.102 01:23:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:02.102 01:23:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:02.102 01:23:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:02.102 01:23:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:02.102 01:23:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:02.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:02.102 01:23:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:02.102 01:23:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:23:02.102 01:23:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:02.102 01:23:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:23:02.102 01:23:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:02.102 01:23:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:02.102 { 00:23:02.102 "params": { 00:23:02.102 "name": "Nvme$subsystem", 00:23:02.102 "trtype": "$TEST_TRANSPORT", 00:23:02.102 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:02.102 "adrfam": "ipv4", 00:23:02.102 "trsvcid": "$NVMF_PORT", 00:23:02.102 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:02.102 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:02.102 "hdgst": ${hdgst:-false}, 00:23:02.102 "ddgst": ${ddgst:-false} 00:23:02.102 }, 00:23:02.102 "method": "bdev_nvme_attach_controller" 00:23:02.102 } 00:23:02.102 EOF 00:23:02.102 )") 00:23:02.102 01:23:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:02.102 01:23:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:02.102 01:23:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:02.102 { 00:23:02.102 "params": { 00:23:02.102 "name": "Nvme$subsystem", 00:23:02.102 "trtype": "$TEST_TRANSPORT", 00:23:02.102 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:02.102 "adrfam": "ipv4", 00:23:02.102 "trsvcid": "$NVMF_PORT", 00:23:02.102 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:02.102 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:02.102 "hdgst": ${hdgst:-false}, 00:23:02.102 "ddgst": ${ddgst:-false} 00:23:02.102 
}, 00:23:02.102 "method": "bdev_nvme_attach_controller" 00:23:02.102 } 00:23:02.102 EOF 00:23:02.102 )") 00:23:02.102 01:23:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:02.102 01:23:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:02.102 01:23:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:02.102 { 00:23:02.102 "params": { 00:23:02.102 "name": "Nvme$subsystem", 00:23:02.102 "trtype": "$TEST_TRANSPORT", 00:23:02.102 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:02.102 "adrfam": "ipv4", 00:23:02.102 "trsvcid": "$NVMF_PORT", 00:23:02.102 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:02.102 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:02.102 "hdgst": ${hdgst:-false}, 00:23:02.102 "ddgst": ${ddgst:-false} 00:23:02.102 }, 00:23:02.102 "method": "bdev_nvme_attach_controller" 00:23:02.102 } 00:23:02.102 EOF 00:23:02.102 )") 00:23:02.102 01:23:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:02.102 01:23:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:02.102 01:23:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:02.102 { 00:23:02.102 "params": { 00:23:02.102 "name": "Nvme$subsystem", 00:23:02.102 "trtype": "$TEST_TRANSPORT", 00:23:02.102 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:02.102 "adrfam": "ipv4", 00:23:02.102 "trsvcid": "$NVMF_PORT", 00:23:02.102 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:02.102 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:02.102 "hdgst": ${hdgst:-false}, 00:23:02.103 "ddgst": ${ddgst:-false} 00:23:02.103 }, 00:23:02.103 "method": "bdev_nvme_attach_controller" 00:23:02.103 } 00:23:02.103 EOF 00:23:02.103 )") 00:23:02.103 01:23:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:02.103 01:23:24 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:02.103 01:23:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:02.103 { 00:23:02.103 "params": { 00:23:02.103 "name": "Nvme$subsystem", 00:23:02.103 "trtype": "$TEST_TRANSPORT", 00:23:02.103 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:02.103 "adrfam": "ipv4", 00:23:02.103 "trsvcid": "$NVMF_PORT", 00:23:02.103 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:02.103 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:02.103 "hdgst": ${hdgst:-false}, 00:23:02.103 "ddgst": ${ddgst:-false} 00:23:02.103 }, 00:23:02.103 "method": "bdev_nvme_attach_controller" 00:23:02.103 } 00:23:02.103 EOF 00:23:02.103 )") 00:23:02.103 01:23:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:02.103 01:23:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:02.103 01:23:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:02.103 { 00:23:02.103 "params": { 00:23:02.103 "name": "Nvme$subsystem", 00:23:02.103 "trtype": "$TEST_TRANSPORT", 00:23:02.103 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:02.103 "adrfam": "ipv4", 00:23:02.103 "trsvcid": "$NVMF_PORT", 00:23:02.103 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:02.103 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:02.103 "hdgst": ${hdgst:-false}, 00:23:02.103 "ddgst": ${ddgst:-false} 00:23:02.103 }, 00:23:02.103 "method": "bdev_nvme_attach_controller" 00:23:02.103 } 00:23:02.103 EOF 00:23:02.103 )") 00:23:02.103 01:23:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:02.103 01:23:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:02.103 01:23:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:02.103 { 00:23:02.103 
"params": { 00:23:02.103 "name": "Nvme$subsystem", 00:23:02.103 "trtype": "$TEST_TRANSPORT", 00:23:02.103 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:02.103 "adrfam": "ipv4", 00:23:02.103 "trsvcid": "$NVMF_PORT", 00:23:02.103 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:02.103 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:02.103 "hdgst": ${hdgst:-false}, 00:23:02.103 "ddgst": ${ddgst:-false} 00:23:02.103 }, 00:23:02.103 "method": "bdev_nvme_attach_controller" 00:23:02.103 } 00:23:02.103 EOF 00:23:02.103 )") 00:23:02.103 01:23:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:02.103 [2024-07-25 01:23:24.401101] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:23:02.103 [2024-07-25 01:23:24.401151] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid962492 ] 00:23:02.103 01:23:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:02.103 01:23:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:02.103 { 00:23:02.103 "params": { 00:23:02.103 "name": "Nvme$subsystem", 00:23:02.103 "trtype": "$TEST_TRANSPORT", 00:23:02.103 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:02.103 "adrfam": "ipv4", 00:23:02.103 "trsvcid": "$NVMF_PORT", 00:23:02.103 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:02.103 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:02.103 "hdgst": ${hdgst:-false}, 00:23:02.103 "ddgst": ${ddgst:-false} 00:23:02.103 }, 00:23:02.103 "method": "bdev_nvme_attach_controller" 00:23:02.103 } 00:23:02.103 EOF 00:23:02.103 )") 00:23:02.103 01:23:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:02.103 01:23:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:02.103 01:23:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:02.103 { 00:23:02.103 "params": { 00:23:02.103 "name": "Nvme$subsystem", 00:23:02.103 "trtype": "$TEST_TRANSPORT", 00:23:02.103 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:02.103 "adrfam": "ipv4", 00:23:02.103 "trsvcid": "$NVMF_PORT", 00:23:02.103 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:02.103 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:02.103 "hdgst": ${hdgst:-false}, 00:23:02.103 "ddgst": ${ddgst:-false} 00:23:02.103 }, 00:23:02.103 "method": "bdev_nvme_attach_controller" 00:23:02.103 } 00:23:02.103 EOF 00:23:02.103 )") 00:23:02.103 01:23:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:02.103 01:23:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:02.103 01:23:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:02.103 { 00:23:02.103 "params": { 00:23:02.103 "name": "Nvme$subsystem", 00:23:02.103 "trtype": "$TEST_TRANSPORT", 00:23:02.103 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:02.103 "adrfam": "ipv4", 00:23:02.103 "trsvcid": "$NVMF_PORT", 00:23:02.103 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:02.103 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:02.103 "hdgst": ${hdgst:-false}, 00:23:02.103 "ddgst": ${ddgst:-false} 00:23:02.103 }, 00:23:02.103 "method": "bdev_nvme_attach_controller" 00:23:02.103 } 00:23:02.103 EOF 00:23:02.103 )") 00:23:02.103 01:23:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:02.103 01:23:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
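`gen_nvmf_target_json` (common.sh@532 onward, visible above) emits one heredoc fragment per subsystem into a `config` array, then joins the fragments with `IFS=,` and pipes the result through `jq .` to produce the bdevperf `--json` input printed below. A trimmed, standalone sketch of the same pattern, with two subsystems and `jq` left out so the block has no external dependency:

```shell
#!/usr/bin/env bash
# Minimal gen_nvmf_target_json: one JSON fragment per subsystem, joined
# with commas into a list of bdev_nvme_attach_controller calls.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1 2; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem"
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

# "$*" joins array elements with the first character of IFS, so a local
# IFS=, inside a function gives the comma-separated list the trace shows.
join_config() { local IFS=,; printf '%s\n' "${config[*]}"; }
join_config
```

The real script additionally substitutes `${hdgst:-false}`/`${ddgst:-false}` defaults inside the heredoc, which is why the composed output below carries literal `"hdgst": false` entries.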
00:23:02.103 EAL: No free 2048 kB hugepages reported on node 1 00:23:02.103 01:23:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:23:02.103 01:23:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:02.103 "params": { 00:23:02.103 "name": "Nvme1", 00:23:02.103 "trtype": "tcp", 00:23:02.103 "traddr": "10.0.0.2", 00:23:02.103 "adrfam": "ipv4", 00:23:02.103 "trsvcid": "4420", 00:23:02.103 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:02.103 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:02.103 "hdgst": false, 00:23:02.103 "ddgst": false 00:23:02.103 }, 00:23:02.103 "method": "bdev_nvme_attach_controller" 00:23:02.103 },{ 00:23:02.103 "params": { 00:23:02.103 "name": "Nvme2", 00:23:02.103 "trtype": "tcp", 00:23:02.103 "traddr": "10.0.0.2", 00:23:02.103 "adrfam": "ipv4", 00:23:02.103 "trsvcid": "4420", 00:23:02.103 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:02.103 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:02.103 "hdgst": false, 00:23:02.103 "ddgst": false 00:23:02.103 }, 00:23:02.103 "method": "bdev_nvme_attach_controller" 00:23:02.103 },{ 00:23:02.103 "params": { 00:23:02.103 "name": "Nvme3", 00:23:02.103 "trtype": "tcp", 00:23:02.103 "traddr": "10.0.0.2", 00:23:02.103 "adrfam": "ipv4", 00:23:02.103 "trsvcid": "4420", 00:23:02.103 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:02.103 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:02.103 "hdgst": false, 00:23:02.103 "ddgst": false 00:23:02.103 }, 00:23:02.103 "method": "bdev_nvme_attach_controller" 00:23:02.103 },{ 00:23:02.103 "params": { 00:23:02.103 "name": "Nvme4", 00:23:02.103 "trtype": "tcp", 00:23:02.103 "traddr": "10.0.0.2", 00:23:02.103 "adrfam": "ipv4", 00:23:02.103 "trsvcid": "4420", 00:23:02.103 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:02.103 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:02.103 "hdgst": false, 00:23:02.103 "ddgst": false 00:23:02.103 }, 00:23:02.103 "method": "bdev_nvme_attach_controller" 00:23:02.103 },{ 
00:23:02.103 "params": { 00:23:02.103 "name": "Nvme5", 00:23:02.103 "trtype": "tcp", 00:23:02.103 "traddr": "10.0.0.2", 00:23:02.103 "adrfam": "ipv4", 00:23:02.103 "trsvcid": "4420", 00:23:02.103 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:02.103 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:02.103 "hdgst": false, 00:23:02.103 "ddgst": false 00:23:02.103 }, 00:23:02.103 "method": "bdev_nvme_attach_controller" 00:23:02.103 },{ 00:23:02.103 "params": { 00:23:02.103 "name": "Nvme6", 00:23:02.103 "trtype": "tcp", 00:23:02.103 "traddr": "10.0.0.2", 00:23:02.103 "adrfam": "ipv4", 00:23:02.103 "trsvcid": "4420", 00:23:02.103 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:02.103 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:02.103 "hdgst": false, 00:23:02.103 "ddgst": false 00:23:02.103 }, 00:23:02.103 "method": "bdev_nvme_attach_controller" 00:23:02.103 },{ 00:23:02.103 "params": { 00:23:02.103 "name": "Nvme7", 00:23:02.103 "trtype": "tcp", 00:23:02.103 "traddr": "10.0.0.2", 00:23:02.103 "adrfam": "ipv4", 00:23:02.103 "trsvcid": "4420", 00:23:02.103 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:02.103 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:02.103 "hdgst": false, 00:23:02.103 "ddgst": false 00:23:02.103 }, 00:23:02.103 "method": "bdev_nvme_attach_controller" 00:23:02.103 },{ 00:23:02.103 "params": { 00:23:02.103 "name": "Nvme8", 00:23:02.103 "trtype": "tcp", 00:23:02.104 "traddr": "10.0.0.2", 00:23:02.104 "adrfam": "ipv4", 00:23:02.104 "trsvcid": "4420", 00:23:02.104 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:02.104 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:02.104 "hdgst": false, 00:23:02.104 "ddgst": false 00:23:02.104 }, 00:23:02.104 "method": "bdev_nvme_attach_controller" 00:23:02.104 },{ 00:23:02.104 "params": { 00:23:02.104 "name": "Nvme9", 00:23:02.104 "trtype": "tcp", 00:23:02.104 "traddr": "10.0.0.2", 00:23:02.104 "adrfam": "ipv4", 00:23:02.104 "trsvcid": "4420", 00:23:02.104 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:02.104 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:23:02.104 "hdgst": false, 00:23:02.104 "ddgst": false 00:23:02.104 }, 00:23:02.104 "method": "bdev_nvme_attach_controller" 00:23:02.104 },{ 00:23:02.104 "params": { 00:23:02.104 "name": "Nvme10", 00:23:02.104 "trtype": "tcp", 00:23:02.104 "traddr": "10.0.0.2", 00:23:02.104 "adrfam": "ipv4", 00:23:02.104 "trsvcid": "4420", 00:23:02.104 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:02.104 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:02.104 "hdgst": false, 00:23:02.104 "ddgst": false 00:23:02.104 }, 00:23:02.104 "method": "bdev_nvme_attach_controller" 00:23:02.104 }' 00:23:02.104 [2024-07-25 01:23:24.457121] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:02.104 [2024-07-25 01:23:24.529593] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:04.047 Running I/O for 10 seconds... 00:23:04.047 01:23:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:04.047 01:23:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:23:04.047 01:23:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:04.047 01:23:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.047 01:23:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:04.047 01:23:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.047 01:23:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:04.047 01:23:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:04.047 01:23:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:04.047 01:23:26 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:23:04.047 01:23:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:23:04.047 01:23:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:23:04.047 01:23:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:04.047 01:23:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:04.047 01:23:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:04.047 01:23:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.047 01:23:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:04.047 01:23:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.047 01:23:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:23:04.047 01:23:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:23:04.047 01:23:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:04.307 01:23:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:04.307 01:23:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:04.307 01:23:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:04.307 01:23:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:04.307 01:23:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.307 01:23:26 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:04.307 01:23:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.307 01:23:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:23:04.307 01:23:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:23:04.307 01:23:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:04.566 01:23:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:04.566 01:23:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:04.566 01:23:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:04.566 01:23:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:04.566 01:23:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.566 01:23:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:04.566 01:23:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.566 01:23:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:23:04.566 01:23:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:23:04.566 01:23:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:23:04.566 01:23:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:23:04.566 01:23:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:23:04.566 01:23:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 962492 00:23:04.566 
01:23:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 962492 ']' 00:23:04.566 01:23:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 962492 00:23:04.566 01:23:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:23:04.566 01:23:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:04.566 01:23:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 962492 00:23:04.566 01:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:04.566 01:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:04.566 01:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 962492' 00:23:04.566 killing process with pid 962492 00:23:04.566 01:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 962492 00:23:04.566 01:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 962492 00:23:04.826 Received shutdown signal, test time was about 0.954123 seconds 00:23:04.826 00:23:04.826 Latency(us) 00:23:04.826 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:04.826 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:04.826 Verification LBA range: start 0x0 length 0x400 00:23:04.826 Nvme1n1 : 0.90 212.83 13.30 0.00 0.00 297658.40 24732.72 269894.34 00:23:04.826 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:04.826 Verification LBA range: start 0x0 length 0x400 00:23:04.826 Nvme2n1 : 0.89 286.10 17.88 0.00 0.00 216457.35 21199.47 218833.25 00:23:04.826 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:04.826 
Verification LBA range: start 0x0 length 0x400 00:23:04.826 Nvme3n1 : 0.88 290.43 18.15 0.00 0.00 209979.66 21655.37 217009.64 00:23:04.826 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:04.826 Verification LBA range: start 0x0 length 0x400 00:23:04.826 Nvme4n1 : 0.87 293.11 18.32 0.00 0.00 202996.42 21199.47 216097.84 00:23:04.826 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:04.826 Verification LBA range: start 0x0 length 0x400 00:23:04.826 Nvme5n1 : 0.90 214.40 13.40 0.00 0.00 273001.66 22681.15 237069.36 00:23:04.826 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:04.826 Verification LBA range: start 0x0 length 0x400 00:23:04.826 Nvme6n1 : 0.92 278.92 17.43 0.00 0.00 207332.84 17552.25 237069.36 00:23:04.826 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:04.826 Verification LBA range: start 0x0 length 0x400 00:23:04.826 Nvme7n1 : 0.91 280.17 17.51 0.00 0.00 202351.08 17096.35 222480.47 00:23:04.826 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:04.826 Verification LBA range: start 0x0 length 0x400 00:23:04.826 Nvme8n1 : 0.89 286.59 17.91 0.00 0.00 193309.38 24846.69 207891.59 00:23:04.826 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:04.826 Verification LBA range: start 0x0 length 0x400 00:23:04.826 Nvme9n1 : 0.89 214.77 13.42 0.00 0.00 251342.43 22795.13 255305.46 00:23:04.826 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:04.826 Verification LBA range: start 0x0 length 0x400 00:23:04.826 Nvme10n1 : 0.95 201.37 12.59 0.00 0.00 254143.15 25530.55 275365.18 00:23:04.826 =================================================================================================================== 00:23:04.826 Total : 2558.71 159.92 0.00 0.00 226615.11 17096.35 275365.18 00:23:04.826 01:23:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@113 -- # sleep 1 00:23:06.206 01:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 962207 00:23:06.206 01:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:23:06.206 01:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:06.206 01:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:06.206 01:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:06.206 01:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:06.206 01:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:06.206 01:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:23:06.206 01:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:06.206 01:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:23:06.206 01:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:06.206 01:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:06.206 rmmod nvme_tcp 00:23:06.206 rmmod nvme_fabrics 00:23:06.206 rmmod nvme_keyring 00:23:06.206 01:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:06.206 01:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:23:06.206 01:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:23:06.206 01:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 962207 ']' 00:23:06.206 01:23:28 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 962207 00:23:06.206 01:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 962207 ']' 00:23:06.206 01:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 962207 00:23:06.206 01:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:23:06.206 01:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:06.206 01:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 962207 00:23:06.206 01:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:06.206 01:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:06.206 01:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 962207' 00:23:06.206 killing process with pid 962207 00:23:06.206 01:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 962207 00:23:06.206 01:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 962207 00:23:06.466 01:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:06.466 01:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:06.466 01:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:06.466 01:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:06.466 01:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:06.466 01:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- 
# xtrace_disable_per_cmd _remove_spdk_ns 00:23:06.466 01:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:06.466 01:23:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:09.010 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:09.010 00:23:09.010 real 0m8.247s 00:23:09.010 user 0m25.602s 00:23:09.010 sys 0m1.387s 00:23:09.010 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:09.010 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:09.010 ************************************ 00:23:09.010 END TEST nvmf_shutdown_tc2 00:23:09.010 ************************************ 00:23:09.010 01:23:30 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:23:09.010 01:23:30 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:23:09.010 01:23:30 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:09.010 01:23:30 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:09.010 01:23:30 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:09.010 ************************************ 00:23:09.010 START TEST nvmf_shutdown_tc3 00:23:09.010 ************************************ 00:23:09.010 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:23:09.010 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:23:09.010 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:09.010 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:09.010 01:23:30 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:09.010 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:09.010 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:09.010 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:09.010 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:09.010 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:09.010 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:09.011 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:09.011 
01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:09.011 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net 
devices under 0000:86:00.0: cvl_0_0' 00:23:09.011 Found net devices under 0000:86:00.0: cvl_0_0 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:09.011 Found net devices under 0000:86:00.1: cvl_0_1 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:09.011 01:23:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:09.011 01:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:09.011 01:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:09.011 01:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:09.011 01:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:09.011 01:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:09.011 01:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:09.011 01:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:09.011 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:09.011 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:23:09.011 00:23:09.011 --- 10.0.0.2 ping statistics --- 00:23:09.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:09.011 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:23:09.011 01:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:09.011 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:09.011 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:23:09.011 00:23:09.011 --- 10.0.0.1 ping statistics --- 00:23:09.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:09.011 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:23:09.011 01:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:09.011 01:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:23:09.012 01:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:09.012 01:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:09.012 01:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:09.012 01:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:09.012 01:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:09.012 01:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:09.012 01:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:09.012 01:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:09.012 01:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:09.012 01:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:09.012 01:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:09.012 01:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=963755 00:23:09.012 01:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 963755 00:23:09.012 01:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:09.012 01:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 963755 ']' 00:23:09.012 01:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:09.012 01:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:09.012 01:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:09.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:09.012 01:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:09.012 01:23:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:09.012 [2024-07-25 01:23:31.323100] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:23:09.012 [2024-07-25 01:23:31.323145] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:09.012 EAL: No free 2048 kB hugepages reported on node 1 00:23:09.012 [2024-07-25 01:23:31.381529] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:09.012 [2024-07-25 01:23:31.453794] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:09.012 [2024-07-25 01:23:31.453836] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:09.012 [2024-07-25 01:23:31.453843] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:09.012 [2024-07-25 01:23:31.453848] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:09.012 [2024-07-25 01:23:31.453853] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:09.012 [2024-07-25 01:23:31.453964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:09.012 [2024-07-25 01:23:31.454060] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:09.012 [2024-07-25 01:23:31.454151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:09.012 [2024-07-25 01:23:31.454152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:09.951 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:09.951 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:23:09.951 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:09.951 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:09.951 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:09.951 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:09.951 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:09.951 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.951 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:09.951 [2024-07-25 01:23:32.173990] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:09.951 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.951 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:09.951 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:09.951 
01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:09.951 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:09.951 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:09.951 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:09.951 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:09.951 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:09.951 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:09.951 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:09.951 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:09.951 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:09.951 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:09.951 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:09.951 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:09.951 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:09.951 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:09.951 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:09.951 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:09.951 01:23:32 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:09.951 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:09.951 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:09.951 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:09.951 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:09.951 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:09.951 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:09.951 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.951 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:09.951 Malloc1 00:23:09.951 [2024-07-25 01:23:32.269667] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:09.951 Malloc2 00:23:09.951 Malloc3 00:23:09.951 Malloc4 00:23:09.951 Malloc5 00:23:10.211 Malloc6 00:23:10.211 Malloc7 00:23:10.211 Malloc8 00:23:10.211 Malloc9 00:23:10.211 Malloc10 00:23:10.211 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.211 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:10.211 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:10.211 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:10.211 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=964037 00:23:10.211 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 964037 
/var/tmp/bdevperf.sock 00:23:10.211 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 964037 ']' 00:23:10.211 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:10.211 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:10.211 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:10.211 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:10.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:10.211 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:10.211 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:10.211 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:23:10.211 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:10.211 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:23:10.211 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:10.211 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:10.211 { 00:23:10.211 "params": { 00:23:10.211 "name": "Nvme$subsystem", 00:23:10.211 "trtype": "$TEST_TRANSPORT", 00:23:10.211 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.211 "adrfam": "ipv4", 00:23:10.211 "trsvcid": "$NVMF_PORT", 
00:23:10.211 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.211 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.211 "hdgst": ${hdgst:-false}, 00:23:10.211 "ddgst": ${ddgst:-false} 00:23:10.211 }, 00:23:10.211 "method": "bdev_nvme_attach_controller" 00:23:10.211 } 00:23:10.211 EOF 00:23:10.211 )") 00:23:10.211 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:10.211 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:10.472 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:10.472 { 00:23:10.472 "params": { 00:23:10.472 "name": "Nvme$subsystem", 00:23:10.472 "trtype": "$TEST_TRANSPORT", 00:23:10.472 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.472 "adrfam": "ipv4", 00:23:10.472 "trsvcid": "$NVMF_PORT", 00:23:10.472 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.472 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.472 "hdgst": ${hdgst:-false}, 00:23:10.472 "ddgst": ${ddgst:-false} 00:23:10.472 }, 00:23:10.472 "method": "bdev_nvme_attach_controller" 00:23:10.472 } 00:23:10.472 EOF 00:23:10.472 )") 00:23:10.472 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:10.472 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:10.472 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:10.472 { 00:23:10.472 "params": { 00:23:10.472 "name": "Nvme$subsystem", 00:23:10.472 "trtype": "$TEST_TRANSPORT", 00:23:10.472 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.472 "adrfam": "ipv4", 00:23:10.472 "trsvcid": "$NVMF_PORT", 00:23:10.472 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.472 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.472 "hdgst": ${hdgst:-false}, 00:23:10.472 "ddgst": ${ddgst:-false} 00:23:10.472 }, 00:23:10.472 
"method": "bdev_nvme_attach_controller" 00:23:10.472 } 00:23:10.472 EOF 00:23:10.472 )") 00:23:10.472 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:10.472 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:10.472 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:10.472 { 00:23:10.472 "params": { 00:23:10.472 "name": "Nvme$subsystem", 00:23:10.472 "trtype": "$TEST_TRANSPORT", 00:23:10.472 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.472 "adrfam": "ipv4", 00:23:10.472 "trsvcid": "$NVMF_PORT", 00:23:10.472 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.472 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.472 "hdgst": ${hdgst:-false}, 00:23:10.472 "ddgst": ${ddgst:-false} 00:23:10.472 }, 00:23:10.472 "method": "bdev_nvme_attach_controller" 00:23:10.472 } 00:23:10.472 EOF 00:23:10.472 )") 00:23:10.472 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:10.472 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:10.472 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:10.472 { 00:23:10.472 "params": { 00:23:10.472 "name": "Nvme$subsystem", 00:23:10.472 "trtype": "$TEST_TRANSPORT", 00:23:10.472 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.472 "adrfam": "ipv4", 00:23:10.472 "trsvcid": "$NVMF_PORT", 00:23:10.472 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.472 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.472 "hdgst": ${hdgst:-false}, 00:23:10.472 "ddgst": ${ddgst:-false} 00:23:10.472 }, 00:23:10.472 "method": "bdev_nvme_attach_controller" 00:23:10.472 } 00:23:10.472 EOF 00:23:10.472 )") 00:23:10.472 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:10.472 01:23:32 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:10.472 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:10.472 { 00:23:10.472 "params": { 00:23:10.472 "name": "Nvme$subsystem", 00:23:10.472 "trtype": "$TEST_TRANSPORT", 00:23:10.472 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.472 "adrfam": "ipv4", 00:23:10.472 "trsvcid": "$NVMF_PORT", 00:23:10.472 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.472 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.472 "hdgst": ${hdgst:-false}, 00:23:10.472 "ddgst": ${ddgst:-false} 00:23:10.472 }, 00:23:10.472 "method": "bdev_nvme_attach_controller" 00:23:10.472 } 00:23:10.472 EOF 00:23:10.472 )") 00:23:10.472 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:10.472 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:10.472 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:10.473 { 00:23:10.473 "params": { 00:23:10.473 "name": "Nvme$subsystem", 00:23:10.473 "trtype": "$TEST_TRANSPORT", 00:23:10.473 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.473 "adrfam": "ipv4", 00:23:10.473 "trsvcid": "$NVMF_PORT", 00:23:10.473 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.473 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.473 "hdgst": ${hdgst:-false}, 00:23:10.473 "ddgst": ${ddgst:-false} 00:23:10.473 }, 00:23:10.473 "method": "bdev_nvme_attach_controller" 00:23:10.473 } 00:23:10.473 EOF 00:23:10.473 )") 00:23:10.473 [2024-07-25 01:23:32.737324] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:23:10.473 [2024-07-25 01:23:32.737376] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid964037 ] 00:23:10.473 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:10.473 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:10.473 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:10.473 { 00:23:10.473 "params": { 00:23:10.473 "name": "Nvme$subsystem", 00:23:10.473 "trtype": "$TEST_TRANSPORT", 00:23:10.473 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.473 "adrfam": "ipv4", 00:23:10.473 "trsvcid": "$NVMF_PORT", 00:23:10.473 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.473 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.473 "hdgst": ${hdgst:-false}, 00:23:10.473 "ddgst": ${ddgst:-false} 00:23:10.473 }, 00:23:10.473 "method": "bdev_nvme_attach_controller" 00:23:10.473 } 00:23:10.473 EOF 00:23:10.473 )") 00:23:10.473 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:10.473 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:10.473 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:10.473 { 00:23:10.473 "params": { 00:23:10.473 "name": "Nvme$subsystem", 00:23:10.473 "trtype": "$TEST_TRANSPORT", 00:23:10.473 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.473 "adrfam": "ipv4", 00:23:10.473 "trsvcid": "$NVMF_PORT", 00:23:10.473 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.473 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.473 "hdgst": ${hdgst:-false}, 00:23:10.473 "ddgst": ${ddgst:-false} 00:23:10.473 }, 00:23:10.473 "method": 
"bdev_nvme_attach_controller" 00:23:10.473 } 00:23:10.473 EOF 00:23:10.473 )") 00:23:10.473 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:10.473 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:10.473 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:10.473 { 00:23:10.473 "params": { 00:23:10.473 "name": "Nvme$subsystem", 00:23:10.473 "trtype": "$TEST_TRANSPORT", 00:23:10.473 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.473 "adrfam": "ipv4", 00:23:10.473 "trsvcid": "$NVMF_PORT", 00:23:10.473 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.473 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.473 "hdgst": ${hdgst:-false}, 00:23:10.473 "ddgst": ${ddgst:-false} 00:23:10.473 }, 00:23:10.473 "method": "bdev_nvme_attach_controller" 00:23:10.473 } 00:23:10.473 EOF 00:23:10.473 )") 00:23:10.473 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:10.473 EAL: No free 2048 kB hugepages reported on node 1 00:23:10.473 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
00:23:10.473 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:23:10.473 01:23:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:10.473 "params": { 00:23:10.473 "name": "Nvme1", 00:23:10.473 "trtype": "tcp", 00:23:10.473 "traddr": "10.0.0.2", 00:23:10.473 "adrfam": "ipv4", 00:23:10.473 "trsvcid": "4420", 00:23:10.473 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:10.473 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:10.473 "hdgst": false, 00:23:10.473 "ddgst": false 00:23:10.473 }, 00:23:10.473 "method": "bdev_nvme_attach_controller" 00:23:10.473 },{ 00:23:10.473 "params": { 00:23:10.473 "name": "Nvme2", 00:23:10.473 "trtype": "tcp", 00:23:10.473 "traddr": "10.0.0.2", 00:23:10.473 "adrfam": "ipv4", 00:23:10.473 "trsvcid": "4420", 00:23:10.473 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:10.473 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:10.473 "hdgst": false, 00:23:10.473 "ddgst": false 00:23:10.473 }, 00:23:10.473 "method": "bdev_nvme_attach_controller" 00:23:10.473 },{ 00:23:10.473 "params": { 00:23:10.473 "name": "Nvme3", 00:23:10.473 "trtype": "tcp", 00:23:10.473 "traddr": "10.0.0.2", 00:23:10.473 "adrfam": "ipv4", 00:23:10.473 "trsvcid": "4420", 00:23:10.473 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:10.473 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:10.473 "hdgst": false, 00:23:10.473 "ddgst": false 00:23:10.473 }, 00:23:10.473 "method": "bdev_nvme_attach_controller" 00:23:10.473 },{ 00:23:10.473 "params": { 00:23:10.473 "name": "Nvme4", 00:23:10.473 "trtype": "tcp", 00:23:10.473 "traddr": "10.0.0.2", 00:23:10.473 "adrfam": "ipv4", 00:23:10.473 "trsvcid": "4420", 00:23:10.473 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:10.473 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:10.473 "hdgst": false, 00:23:10.473 "ddgst": false 00:23:10.473 }, 00:23:10.473 "method": "bdev_nvme_attach_controller" 00:23:10.473 },{ 00:23:10.473 "params": { 00:23:10.473 "name": "Nvme5", 00:23:10.473 
"trtype": "tcp", 00:23:10.473 "traddr": "10.0.0.2", 00:23:10.473 "adrfam": "ipv4", 00:23:10.473 "trsvcid": "4420", 00:23:10.473 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:10.473 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:10.473 "hdgst": false, 00:23:10.473 "ddgst": false 00:23:10.473 }, 00:23:10.473 "method": "bdev_nvme_attach_controller" 00:23:10.473 },{ 00:23:10.473 "params": { 00:23:10.473 "name": "Nvme6", 00:23:10.473 "trtype": "tcp", 00:23:10.473 "traddr": "10.0.0.2", 00:23:10.473 "adrfam": "ipv4", 00:23:10.473 "trsvcid": "4420", 00:23:10.473 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:10.473 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:10.473 "hdgst": false, 00:23:10.473 "ddgst": false 00:23:10.473 }, 00:23:10.473 "method": "bdev_nvme_attach_controller" 00:23:10.473 },{ 00:23:10.473 "params": { 00:23:10.473 "name": "Nvme7", 00:23:10.473 "trtype": "tcp", 00:23:10.473 "traddr": "10.0.0.2", 00:23:10.473 "adrfam": "ipv4", 00:23:10.473 "trsvcid": "4420", 00:23:10.473 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:10.473 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:10.473 "hdgst": false, 00:23:10.473 "ddgst": false 00:23:10.473 }, 00:23:10.473 "method": "bdev_nvme_attach_controller" 00:23:10.473 },{ 00:23:10.473 "params": { 00:23:10.473 "name": "Nvme8", 00:23:10.473 "trtype": "tcp", 00:23:10.473 "traddr": "10.0.0.2", 00:23:10.473 "adrfam": "ipv4", 00:23:10.473 "trsvcid": "4420", 00:23:10.473 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:10.473 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:10.473 "hdgst": false, 00:23:10.473 "ddgst": false 00:23:10.473 }, 00:23:10.473 "method": "bdev_nvme_attach_controller" 00:23:10.473 },{ 00:23:10.473 "params": { 00:23:10.473 "name": "Nvme9", 00:23:10.473 "trtype": "tcp", 00:23:10.473 "traddr": "10.0.0.2", 00:23:10.473 "adrfam": "ipv4", 00:23:10.473 "trsvcid": "4420", 00:23:10.473 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:10.473 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:10.473 "hdgst": false, 00:23:10.473 "ddgst": 
false 00:23:10.473 }, 00:23:10.473 "method": "bdev_nvme_attach_controller" 00:23:10.473 },{ 00:23:10.473 "params": { 00:23:10.473 "name": "Nvme10", 00:23:10.473 "trtype": "tcp", 00:23:10.473 "traddr": "10.0.0.2", 00:23:10.473 "adrfam": "ipv4", 00:23:10.473 "trsvcid": "4420", 00:23:10.473 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:10.473 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:10.473 "hdgst": false, 00:23:10.473 "ddgst": false 00:23:10.473 }, 00:23:10.473 "method": "bdev_nvme_attach_controller" 00:23:10.473 }' 00:23:10.473 [2024-07-25 01:23:32.793281] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.473 [2024-07-25 01:23:32.867442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:12.377 Running I/O for 10 seconds... 00:23:12.962 01:23:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:12.962 01:23:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:23:12.962 01:23:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:12.962 01:23:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.962 01:23:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:12.962 01:23:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.962 01:23:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:12.962 01:23:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:12.962 01:23:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:12.962 01:23:35 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:12.962 01:23:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:23:12.962 01:23:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:23:12.962 01:23:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:23:12.962 01:23:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:12.962 01:23:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:12.962 01:23:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:12.962 01:23:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.962 01:23:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:12.962 01:23:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.962 01:23:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:23:12.962 01:23:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:23:12.962 01:23:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:23:12.962 01:23:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:23:12.962 01:23:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:23:12.962 01:23:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 963755 00:23:12.962 01:23:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 963755 ']' 00:23:12.962 01:23:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 963755 
00:23:12.962 01:23:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:23:12.962 01:23:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:12.962 01:23:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 963755 00:23:12.962 01:23:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:12.962 01:23:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:12.962 01:23:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 963755' 00:23:12.962 killing process with pid 963755 00:23:12.962 01:23:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 963755 00:23:12.962 01:23:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 963755 00:23:12.962 [2024-07-25 01:23:35.382311] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1683330 is same with the state(5) to be set 00:23:12.962 [2024-07-25 01:23:35.383179] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1685da0 is same with the state(5) to be set 00:23:12.962 [2024-07-25 01:23:35.383209] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1685da0 is same with the state(5) to be set 00:23:12.962 [2024-07-25 01:23:35.383217] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1685da0 is same with the state(5) to be set 00:23:12.962 [2024-07-25 01:23:35.383224] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1685da0 is same with the state(5) to be set 00:23:12.962 [2024-07-25 01:23:35.383231] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1685da0 is same with the state(5) 
to be set 00:23:12.962 [2024-07-25 01:23:35.383238] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1685da0 is same with the state(5) to be set 00:23:12.962 [2024-07-25 01:23:35.383246] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1685da0 is same with the state(5) to be set 00:23:12.962 [2024-07-25 01:23:35.383252] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1685da0 is same with the state(5) to be set 00:23:12.962 [2024-07-25 01:23:35.383258] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1685da0 is same with the state(5) to be set 00:23:12.962 [2024-07-25 01:23:35.383264] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1685da0 is same with the state(5) to be set 00:23:12.962 [2024-07-25 01:23:35.383271] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1685da0 is same with the state(5) to be set 00:23:12.963 [2024-07-25 01:23:35.383278] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1685da0 is same with the state(5) to be set 00:23:12.963 [2024-07-25 01:23:35.383289] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1685da0 is same with the state(5) to be set 00:23:12.963 [2024-07-25 01:23:35.383295] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1685da0 is same with the state(5) to be set 00:23:12.963 [2024-07-25 01:23:35.383301] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1685da0 is same with the state(5) to be set 00:23:12.963 [2024-07-25 01:23:35.383308] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1685da0 is same with the state(5) to be set 00:23:12.963 [2024-07-25 01:23:35.383315] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1685da0 is same with the state(5) to be set 00:23:12.963 
[2024-07-25 01:23:35.383321] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1685da0 is same with the state(5) to be set 00:23:12.963 [2024-07-25 01:23:35.383327] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1685da0 is same with the state(5) to be set 00:23:12.963 [2024-07-25 01:23:35.383333] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1685da0 is same with the state(5) to be set 00:23:12.963 [2024-07-25 01:23:35.383339] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1685da0 is same with the state(5) to be set 00:23:12.963 [2024-07-25 01:23:35.383345] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1685da0 is same with the state(5) to be set 00:23:12.963 [2024-07-25 01:23:35.383351] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1685da0 is same with the state(5) to be set 00:23:12.963 [2024-07-25 01:23:35.383357] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1685da0 is same with the state(5) to be set 00:23:12.963 [2024-07-25 01:23:35.383364] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1685da0 is same with the state(5) to be set 00:23:12.963 [2024-07-25 01:23:35.383370] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1685da0 is same with the state(5) to be set 00:23:12.963 [2024-07-25 01:23:35.383376] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1685da0 is same with the state(5) to be set 00:23:12.963 [2024-07-25 01:23:35.383382] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1685da0 is same with the state(5) to be set 00:23:12.963 [2024-07-25 01:23:35.383388] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1685da0 is same with the state(5) to be set 00:23:12.963 [2024-07-25 01:23:35.383394] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1685da0 is same with the state(5) to be set
(message repeated for tqpair=0x1685da0 from 01:23:35.383400 through 01:23:35.383598)
00:23:12.963 [2024-07-25 01:23:35.384489] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16837e0 is same with the state(5) to be set (repeated through 01:23:35.384500)
00:23:12.963 [2024-07-25 01:23:35.385343] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1683cb0 is same with the state(5) to be set (repeated through 01:23:35.385372)
00:23:12.963 [2024-07-25 01:23:35.386146] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684160 is same with the state(5) to be set
00:23:12.963 [2024-07-25 01:23:35.387252] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684ac0 is same with the state(5) to be set (repeated through 01:23:35.387665)
00:23:12.964 [2024-07-25 01:23:35.388657] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1684f90 is same with the state(5) to be set (repeated through 01:23:35.388984)
00:23:12.965 [2024-07-25 01:23:35.389719] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1685440 is same with the state(5) to be set (repeated through 01:23:35.390109)
00:23:12.965 [2024-07-25 01:23:35.390115]
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1685440 is same with the state(5) to be set
00:23:12.965 [2024-07-25 01:23:35.390121] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1685440 is same with the state(5) to be set
00:23:12.965 [2024-07-25 01:23:35.396461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:12.965 [2024-07-25 01:23:35.396491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:12.965 [2024-07-25 01:23:35.396501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:12.965 [2024-07-25 01:23:35.396509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:12.965 [2024-07-25 01:23:35.396516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:12.965 [2024-07-25 01:23:35.396523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:12.965 [2024-07-25 01:23:35.396531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:12.965 [2024-07-25 01:23:35.396537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:12.965 [2024-07-25 01:23:35.396548] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd108d0 is same with the state(5) to be set
00:23:12.965 [2024-07-25 01:23:35.396576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:12.966 [2024-07-25 01:23:35.396584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:12.966 [2024-07-25 01:23:35.396591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:12.966 [2024-07-25 01:23:35.396598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:12.966 [2024-07-25 01:23:35.396605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:12.966 [2024-07-25 01:23:35.396612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:12.966 [2024-07-25 01:23:35.396619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:12.966 [2024-07-25 01:23:35.396625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:12.966 [2024-07-25 01:23:35.396632] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb88c50 is same with the state(5) to be set
00:23:12.966 [2024-07-25 01:23:35.396656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:12.966 [2024-07-25 01:23:35.396664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:12.966 [2024-07-25 01:23:35.396672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:12.966 [2024-07-25 01:23:35.396678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:12.966 [2024-07-25 01:23:35.396686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:12.966 [2024-07-25 01:23:35.396692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:12.966 [2024-07-25 01:23:35.396699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:12.966 [2024-07-25 01:23:35.396705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:12.966 [2024-07-25 01:23:35.396712] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4630 is same with the state(5) to be set
00:23:12.966 [2024-07-25 01:23:35.396733] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:12.966 [2024-07-25 01:23:35.396741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:12.966 [2024-07-25 01:23:35.396748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:12.966 [2024-07-25 01:23:35.396755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:12.966 [2024-07-25 01:23:35.396762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:12.966 [2024-07-25 01:23:35.396768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0
sqhd:0000 p:0 m:0 dnr:0 00:23:12.966 [2024-07-25 01:23:35.396775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.966 [2024-07-25 01:23:35.396784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.966 [2024-07-25 01:23:35.396791] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd18610 is same with the state(5) to be set 00:23:12.966 [2024-07-25 01:23:35.396813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.966 [2024-07-25 01:23:35.396821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.966 [2024-07-25 01:23:35.396828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.966 [2024-07-25 01:23:35.396835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.966 [2024-07-25 01:23:35.396841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.966 [2024-07-25 01:23:35.396848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.966 [2024-07-25 01:23:35.396855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.966 [2024-07-25 01:23:35.396861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.966 [2024-07-25 01:23:35.396868] nvme_tcp.c: 
327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce3780 is same with the state(5) to be set 00:23:12.966 [2024-07-25 01:23:35.396891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.966 [2024-07-25 01:23:35.396899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.966 [2024-07-25 01:23:35.396906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.966 [2024-07-25 01:23:35.396913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.966 [2024-07-25 01:23:35.396920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.966 [2024-07-25 01:23:35.396926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.966 [2024-07-25 01:23:35.396933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.966 [2024-07-25 01:23:35.396939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.966 [2024-07-25 01:23:35.396945] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x693340 is same with the state(5) to be set 00:23:12.966 [2024-07-25 01:23:35.396967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.966 [2024-07-25 01:23:35.396974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:12.966 [2024-07-25 01:23:35.396982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.966 [2024-07-25 01:23:35.396988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.966 [2024-07-25 01:23:35.396995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.966 [2024-07-25 01:23:35.397004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.966 [2024-07-25 01:23:35.397012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.966 [2024-07-25 01:23:35.397019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.966 [2024-07-25 01:23:35.397025] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb676b0 is same with the state(5) to be set 00:23:12.966 [2024-07-25 01:23:35.397054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.966 [2024-07-25 01:23:35.397064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.966 [2024-07-25 01:23:35.397073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.966 [2024-07-25 01:23:35.397079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.966 [2024-07-25 01:23:35.397087] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.966 [2024-07-25 01:23:35.397094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.966 [2024-07-25 01:23:35.397101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.966 [2024-07-25 01:23:35.397108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.966 [2024-07-25 01:23:35.397114] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb44c70 is same with the state(5) to be set 00:23:12.966 [2024-07-25 01:23:35.397134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.966 [2024-07-25 01:23:35.397145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.966 [2024-07-25 01:23:35.397154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.966 [2024-07-25 01:23:35.397161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.966 [2024-07-25 01:23:35.397168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.966 [2024-07-25 01:23:35.397176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.966 [2024-07-25 01:23:35.397183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 
cdw10:00000000 cdw11:00000000 00:23:12.966 [2024-07-25 01:23:35.397189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.966 [2024-07-25 01:23:35.397196] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2a60 is same with the state(5) to be set 00:23:12.966 [2024-07-25 01:23:35.397219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.966 [2024-07-25 01:23:35.397229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.966 [2024-07-25 01:23:35.397237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.966 [2024-07-25 01:23:35.397244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.966 [2024-07-25 01:23:35.397254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.966 [2024-07-25 01:23:35.397261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.966 [2024-07-25 01:23:35.397268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.967 [2024-07-25 01:23:35.397276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.967 [2024-07-25 01:23:35.397283] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8dc50 is same with the state(5) to be set 00:23:12.967 [2024-07-25 01:23:35.398474] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.967 [2024-07-25 01:23:35.398498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.967 [2024-07-25 01:23:35.398513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.967 [2024-07-25 01:23:35.398520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.967 [2024-07-25 01:23:35.398529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.967 [2024-07-25 01:23:35.398536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.967 [2024-07-25 01:23:35.398546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.967 [2024-07-25 01:23:35.398554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.967 [2024-07-25 01:23:35.398564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.967 [2024-07-25 01:23:35.398570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.967 [2024-07-25 01:23:35.398580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.967 [2024-07-25 01:23:35.398586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.967 [2024-07-25 01:23:35.398595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.967 [2024-07-25 01:23:35.398603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.967 [2024-07-25 01:23:35.398611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.967 [2024-07-25 01:23:35.398618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.967 [2024-07-25 01:23:35.398626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.967 [2024-07-25 01:23:35.398633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.967 [2024-07-25 01:23:35.398641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.967 [2024-07-25 01:23:35.398647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.967 [2024-07-25 01:23:35.398660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.967 [2024-07-25 01:23:35.398667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.967 [2024-07-25 01:23:35.398675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:12.967 [2024-07-25 01:23:35.398682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.967 [2024-07-25 01:23:35.398691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.967 [2024-07-25 01:23:35.398698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.967 [2024-07-25 01:23:35.398707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.967 [2024-07-25 01:23:35.398713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.967 [2024-07-25 01:23:35.398722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.967 [2024-07-25 01:23:35.398729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.967 [2024-07-25 01:23:35.398738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.967 [2024-07-25 01:23:35.398745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.967 [2024-07-25 01:23:35.398754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.967 [2024-07-25 01:23:35.398761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.967 [2024-07-25 01:23:35.398770] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.967 [2024-07-25 01:23:35.398776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.967 [2024-07-25 01:23:35.398784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.967 [2024-07-25 01:23:35.398791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.967 [2024-07-25 01:23:35.398799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.967 [2024-07-25 01:23:35.398806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.967 [2024-07-25 01:23:35.398815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.967 [2024-07-25 01:23:35.398821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.967 [2024-07-25 01:23:35.398830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.967 [2024-07-25 01:23:35.398838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.967 [2024-07-25 01:23:35.398846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.967 [2024-07-25 01:23:35.398854] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.967 [2024-07-25 01:23:35.398863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.967 [2024-07-25 01:23:35.398870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.967 [2024-07-25 01:23:35.398878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.967 [2024-07-25 01:23:35.398884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.967 [2024-07-25 01:23:35.398893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.967 [2024-07-25 01:23:35.398901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.967 [2024-07-25 01:23:35.398908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.967 [2024-07-25 01:23:35.398915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.967 [2024-07-25 01:23:35.398924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.967 [2024-07-25 01:23:35.398931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.967 [2024-07-25 01:23:35.398939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.967 [2024-07-25 01:23:35.398947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.967 [2024-07-25 01:23:35.398956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.967 [2024-07-25 01:23:35.398963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.968 [2024-07-25 01:23:35.398971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.968 [2024-07-25 01:23:35.398978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.968 [2024-07-25 01:23:35.398986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.968 [2024-07-25 01:23:35.398994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.968 [2024-07-25 01:23:35.399003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.968 [2024-07-25 01:23:35.399009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.968 [2024-07-25 01:23:35.399018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.968 [2024-07-25 01:23:35.399024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.968 
[2024-07-25 01:23:35.399033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.968 [2024-07-25 01:23:35.399040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.968 [2024-07-25 01:23:35.399056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.968 [2024-07-25 01:23:35.399063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.968 [2024-07-25 01:23:35.399073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.968 [2024-07-25 01:23:35.399080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.968 [2024-07-25 01:23:35.399088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.968 [2024-07-25 01:23:35.399095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.968 [2024-07-25 01:23:35.399104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.968 [2024-07-25 01:23:35.399111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.968 [2024-07-25 01:23:35.399120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.968 [2024-07-25 01:23:35.399127] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.968 [2024-07-25 01:23:35.399136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.968 [2024-07-25 01:23:35.399143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.968 [2024-07-25 01:23:35.399154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.968 [2024-07-25 01:23:35.399161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.968 [2024-07-25 01:23:35.399169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.968 [2024-07-25 01:23:35.399176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.968 [2024-07-25 01:23:35.399185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.968 [2024-07-25 01:23:35.399192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.968 [2024-07-25 01:23:35.399200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.968 [2024-07-25 01:23:35.399207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.968 [2024-07-25 01:23:35.399215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.968 [2024-07-25 01:23:35.399222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.968 [2024-07-25 01:23:35.399231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.968 [2024-07-25 01:23:35.399237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.968 [2024-07-25 01:23:35.399246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.968 [2024-07-25 01:23:35.399255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.968 [2024-07-25 01:23:35.399264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.968 [2024-07-25 01:23:35.399270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.968 [2024-07-25 01:23:35.399278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.968 [2024-07-25 01:23:35.399285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.968 [2024-07-25 01:23:35.399293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.968 [2024-07-25 01:23:35.399301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:12.968 [2024-07-25 01:23:35.399309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.968 [2024-07-25 01:23:35.399316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.968 [2024-07-25 01:23:35.399325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.968 [2024-07-25 01:23:35.399331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.968 [2024-07-25 01:23:35.399340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.968 [2024-07-25 01:23:35.399347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.968 [2024-07-25 01:23:35.399356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.968 [2024-07-25 01:23:35.399363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.968 [2024-07-25 01:23:35.399372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.968 [2024-07-25 01:23:35.399378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.968 [2024-07-25 01:23:35.399386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.968 [2024-07-25 
01:23:35.399393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.968 [2024-07-25 01:23:35.399404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.968 [2024-07-25 01:23:35.399411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.968 [2024-07-25 01:23:35.399419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.968 [2024-07-25 01:23:35.399426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.968 [2024-07-25 01:23:35.399434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.968 [2024-07-25 01:23:35.399440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.968 [2024-07-25 01:23:35.399450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.968 [2024-07-25 01:23:35.399457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.968 [2024-07-25 01:23:35.399466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.968 [2024-07-25 01:23:35.399473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.968 [2024-07-25 01:23:35.399481] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.968 [2024-07-25 01:23:35.399487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.968 [2024-07-25 01:23:35.399496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.968 [2024-07-25 01:23:35.399503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.968 [2024-07-25 01:23:35.399511] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3ec00 is same with the state(5) to be set 00:23:12.968 [2024-07-25 01:23:35.399570] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xb3ec00 was disconnected and freed. reset controller. 00:23:12.968 [2024-07-25 01:23:35.399654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.968 [2024-07-25 01:23:35.399664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.968 [2024-07-25 01:23:35.399675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.968 [2024-07-25 01:23:35.399682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.968 [2024-07-25 01:23:35.399691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.968 [2024-07-25 01:23:35.399699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.968 [2024-07-25 01:23:35.399708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.968 [2024-07-25 01:23:35.399715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.968 [2024-07-25 01:23:35.399723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.969 [2024-07-25 01:23:35.399729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.969 [2024-07-25 01:23:35.399738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.969 [2024-07-25 01:23:35.399745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.969 [2024-07-25 01:23:35.399753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.969 [2024-07-25 01:23:35.399759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.969 [2024-07-25 01:23:35.399770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.969 [2024-07-25 01:23:35.399779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.969 [2024-07-25 01:23:35.399788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.969 
[2024-07-25 01:23:35.399794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.969 [2024-07-25 01:23:35.399803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.969 [2024-07-25 01:23:35.399809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.969 [2024-07-25 01:23:35.399818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.969 [2024-07-25 01:23:35.399824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.969 [2024-07-25 01:23:35.399832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.969 [2024-07-25 01:23:35.399839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.969 [2024-07-25 01:23:35.399847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.969 [2024-07-25 01:23:35.399854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.969 [2024-07-25 01:23:35.399862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.969 [2024-07-25 01:23:35.399869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.969 [2024-07-25 01:23:35.399877] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.969 [2024-07-25 01:23:35.399884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.969 [2024-07-25 01:23:35.399892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.969 [2024-07-25 01:23:35.399899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.969 [2024-07-25 01:23:35.399907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.969 [2024-07-25 01:23:35.399914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.969 [2024-07-25 01:23:35.399922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.969 [2024-07-25 01:23:35.399928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.969 [2024-07-25 01:23:35.399936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.969 [2024-07-25 01:23:35.399947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.969 [2024-07-25 01:23:35.399955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.969 [2024-07-25 01:23:35.399962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.969 [2024-07-25 01:23:35.399972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.969 [2024-07-25 01:23:35.399979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.969 [2024-07-25 01:23:35.399987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.969 [2024-07-25 01:23:35.399993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.969 [2024-07-25 01:23:35.400002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.969 [2024-07-25 01:23:35.400009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.969 [2024-07-25 01:23:35.400017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.969 [2024-07-25 01:23:35.400023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.969 [2024-07-25 01:23:35.400031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.969 [2024-07-25 01:23:35.400038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.969 [2024-07-25 01:23:35.400051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:12.969 [2024-07-25 01:23:35.400058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.969 [2024-07-25 01:23:35.400067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.969 [2024-07-25 01:23:35.400073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.969 [2024-07-25 01:23:35.400082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.969 [2024-07-25 01:23:35.400088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.969 [2024-07-25 01:23:35.400096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.969 [2024-07-25 01:23:35.400103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.969 [2024-07-25 01:23:35.400111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.969 [2024-07-25 01:23:35.400117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.969 [2024-07-25 01:23:35.400126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.969 [2024-07-25 01:23:35.400132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.969 [2024-07-25 01:23:35.400140] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.969 [2024-07-25 01:23:35.400147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.969 [2024-07-25 01:23:35.400155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.969 [2024-07-25 01:23:35.400162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.969 [2024-07-25 01:23:35.400170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.969 [2024-07-25 01:23:35.400177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.969 [2024-07-25 01:23:35.400185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.969 [2024-07-25 01:23:35.400193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.969 [2024-07-25 01:23:35.400202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.969 [2024-07-25 01:23:35.400208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.969 [2024-07-25 01:23:35.400217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.969 [2024-07-25 01:23:35.400224] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.969 [2024-07-25 01:23:35.400233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.969 [2024-07-25 01:23:35.400239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.969 [2024-07-25 01:23:35.400247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.969 [2024-07-25 01:23:35.400254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.969 [2024-07-25 01:23:35.400262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.969 [2024-07-25 01:23:35.400269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.969 [2024-07-25 01:23:35.400276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.969 [2024-07-25 01:23:35.400283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.969 [2024-07-25 01:23:35.400291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.969 [2024-07-25 01:23:35.400298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.969 [2024-07-25 01:23:35.400306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.969 [2024-07-25 01:23:35.400312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.969 [2024-07-25 01:23:35.400321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.969 [2024-07-25 01:23:35.400328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.970 [2024-07-25 01:23:35.400335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.970 [2024-07-25 01:23:35.400341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.970 [2024-07-25 01:23:35.400351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.970 [2024-07-25 01:23:35.400358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.970 [2024-07-25 01:23:35.400366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.970 [2024-07-25 01:23:35.400372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.970 [2024-07-25 01:23:35.400382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.970 [2024-07-25 01:23:35.400389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.970 [2024-07-25 
01:23:35.400397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.970 [2024-07-25 01:23:35.400403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.970 [2024-07-25 01:23:35.400411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.970 [2024-07-25 01:23:35.400418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.970 [2024-07-25 01:23:35.400426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.970 [2024-07-25 01:23:35.400434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.970 [2024-07-25 01:23:35.400443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.970 [2024-07-25 01:23:35.400449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.970 [2024-07-25 01:23:35.400458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.970 [2024-07-25 01:23:35.400465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.970 [2024-07-25 01:23:35.400473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.970 [2024-07-25 01:23:35.400480] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.970 [2024-07-25 01:23:35.400488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.970 [2024-07-25 01:23:35.400494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.970 [2024-07-25 01:23:35.400503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.970 [2024-07-25 01:23:35.400509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.970 [2024-07-25 01:23:35.400517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.970 [2024-07-25 01:23:35.400524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.970 [2024-07-25 01:23:35.400532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.970 [2024-07-25 01:23:35.400539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.970 [2024-07-25 01:23:35.400548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.970 [2024-07-25 01:23:35.400555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.970 [2024-07-25 01:23:35.400563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 
nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.970 [2024-07-25 01:23:35.400570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.970 [2024-07-25 01:23:35.400578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.970 [2024-07-25 01:23:35.400585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.970 [2024-07-25 01:23:35.400594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.970 [2024-07-25 01:23:35.400601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.970 [2024-07-25 01:23:35.400609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.970 [2024-07-25 01:23:35.400615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.970 [2024-07-25 01:23:35.400626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.970 [2024-07-25 01:23:35.400633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.970 [2024-07-25 01:23:35.400717] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc6f780 was disconnected and freed. reset controller. 
00:23:12.970 [2024-07-25 01:23:35.400818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.970 [2024-07-25 01:23:35.400828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.970 [2024-07-25 01:23:35.400840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.970 [2024-07-25 01:23:35.400847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.970 [2024-07-25 01:23:35.400857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.970 [2024-07-25 01:23:35.400864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.970 [2024-07-25 01:23:35.400872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.970 [2024-07-25 01:23:35.400878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.970 [2024-07-25 01:23:35.400886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.970 [2024-07-25 01:23:35.400893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.970 [2024-07-25 01:23:35.400902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.970 [2024-07-25 01:23:35.400908] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.970 [2024-07-25 01:23:35.400921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.970 [2024-07-25 01:23:35.408670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.970 [2024-07-25 01:23:35.408686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.970 [2024-07-25 01:23:35.408695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.970 [2024-07-25 01:23:35.408704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.970 [2024-07-25 01:23:35.408711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.970 [2024-07-25 01:23:35.408720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.970 [2024-07-25 01:23:35.408727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.970 [2024-07-25 01:23:35.408737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.970 [2024-07-25 01:23:35.408744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.970 [2024-07-25 01:23:35.408753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.970 [2024-07-25 01:23:35.408760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE command / "ABORTED - SQ DELETION (00/08)" pairs repeat for cid:12-63, lba:26112-32640 (step 128), through 01:23:35.409623 ...]
00:23:12.972 [2024-07-25 01:23:35.409688] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc71f50 was disconnected and freed. reset controller.
00:23:12.972 [2024-07-25 01:23:35.409809] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd108d0 (9): Bad file descriptor
00:23:12.972 [2024-07-25 01:23:35.409830] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb88c50 (9): Bad file descriptor
00:23:12.972 [2024-07-25 01:23:35.409847] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce4630 (9): Bad file descriptor
00:23:12.972 [2024-07-25 01:23:35.409863] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd18610 (9): Bad file descriptor
00:23:12.972 [2024-07-25 01:23:35.409878] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce3780 (9): Bad file descriptor
00:23:12.972 [2024-07-25 01:23:35.409889] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x693340 (9): Bad file descriptor
00:23:12.972 [2024-07-25 01:23:35.409901] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb676b0 (9): Bad file descriptor
00:23:12.972 [2024-07-25 01:23:35.409913] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb44c70 (9): Bad file descriptor
00:23:12.972 [2024-07-25 01:23:35.409930] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce2a60 (9): Bad file descriptor
00:23:12.972 [2024-07-25 01:23:35.409942] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8dc50 (9): Bad file descriptor
00:23:12.972 [2024-07-25 01:23:35.410016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.972 [2024-07-25 01:23:35.410027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE command / "ABORTED - SQ DELETION (00/08)" pairs repeat for cid:48-63, lba:22528-24448 (step 128), through 01:23:35.410306 ...]
00:23:12.972 [2024-07-25 01:23:35.410316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.972 [2024-07-25 01:23:35.410323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ command / "ABORTED - SQ DELETION (00/08)" pairs repeat for cid:1-43, lba:16512-21888 (step 128), through 01:23:35.411024 ...]
00:23:12.974 [2024-07-25 01:23:35.411032] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.974 [2024-07-25 01:23:35.411041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.974 [2024-07-25 01:23:35.411053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.974 [2024-07-25 01:23:35.411064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.974 [2024-07-25 01:23:35.411072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.974 [2024-07-25 01:23:35.411081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.974 [2024-07-25 01:23:35.411088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.974 [2024-07-25 01:23:35.411150] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc7e5c0 was disconnected and freed. reset controller. 
00:23:12.974 [2024-07-25 01:23:35.411226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.974 [2024-07-25 01:23:35.411235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.974 [2024-07-25 01:23:35.411246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.974 [2024-07-25 01:23:35.411254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.974 [2024-07-25 01:23:35.411263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.974 [2024-07-25 01:23:35.411270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.974 [2024-07-25 01:23:35.411280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.974 [2024-07-25 01:23:35.411287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.974 [2024-07-25 01:23:35.411296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.974 [2024-07-25 01:23:35.411303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.974 [2024-07-25 01:23:35.411311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.974 [2024-07-25 01:23:35.411319] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.974 [2024-07-25 01:23:35.411327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.974 [2024-07-25 01:23:35.411335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.974 [2024-07-25 01:23:35.411344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.974 [2024-07-25 01:23:35.411352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.974 [2024-07-25 01:23:35.411361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.974 [2024-07-25 01:23:35.411368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.974 [2024-07-25 01:23:35.411377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.974 [2024-07-25 01:23:35.411384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.974 [2024-07-25 01:23:35.411396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.974 [2024-07-25 01:23:35.411403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.974 [2024-07-25 01:23:35.411412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.974 [2024-07-25 01:23:35.411421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.974 [2024-07-25 01:23:35.411430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.974 [2024-07-25 01:23:35.411437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.974 [2024-07-25 01:23:35.411446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.974 [2024-07-25 01:23:35.411453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.974 [2024-07-25 01:23:35.411462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.974 [2024-07-25 01:23:35.411469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.974 [2024-07-25 01:23:35.411478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.974 [2024-07-25 01:23:35.411485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.974 [2024-07-25 01:23:35.411494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.974 [2024-07-25 01:23:35.411501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:12.974 [2024-07-25 01:23:35.411509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.974 [2024-07-25 01:23:35.411517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.974 [2024-07-25 01:23:35.411526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.974 [2024-07-25 01:23:35.411532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.974 [2024-07-25 01:23:35.411541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.974 [2024-07-25 01:23:35.411548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.974 [2024-07-25 01:23:35.411558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.974 [2024-07-25 01:23:35.411565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.974 [2024-07-25 01:23:35.411574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.974 [2024-07-25 01:23:35.411581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.974 [2024-07-25 01:23:35.411590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.974 [2024-07-25 
01:23:35.411599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.974 [2024-07-25 01:23:35.411607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.974 [2024-07-25 01:23:35.411616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.974 [2024-07-25 01:23:35.411625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.974 [2024-07-25 01:23:35.411632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.974 [2024-07-25 01:23:35.411641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.974 [2024-07-25 01:23:35.411648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.974 [2024-07-25 01:23:35.411657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.974 [2024-07-25 01:23:35.411664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.974 [2024-07-25 01:23:35.411673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.974 [2024-07-25 01:23:35.411681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.974 [2024-07-25 01:23:35.411690] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.974 [2024-07-25 01:23:35.411697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.974 [2024-07-25 01:23:35.411706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.974 [2024-07-25 01:23:35.411713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.974 [2024-07-25 01:23:35.411722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.974 [2024-07-25 01:23:35.411729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.974 [2024-07-25 01:23:35.411738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.974 [2024-07-25 01:23:35.411744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.974 [2024-07-25 01:23:35.411753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.974 [2024-07-25 01:23:35.411760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.974 [2024-07-25 01:23:35.411769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.974 [2024-07-25 01:23:35.411776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.974 [2024-07-25 01:23:35.411785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.975 [2024-07-25 01:23:35.411792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.975 [2024-07-25 01:23:35.411802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.975 [2024-07-25 01:23:35.411810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.975 [2024-07-25 01:23:35.411819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.975 [2024-07-25 01:23:35.411827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.975 [2024-07-25 01:23:35.411835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.975 [2024-07-25 01:23:35.411842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.975 [2024-07-25 01:23:35.411851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.975 [2024-07-25 01:23:35.411858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.975 [2024-07-25 01:23:35.411867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.975 
[2024-07-25 01:23:35.411876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.975 [2024-07-25 01:23:35.411885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.975 [2024-07-25 01:23:35.411892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.975 [2024-07-25 01:23:35.411901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.975 [2024-07-25 01:23:35.411908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.975 [2024-07-25 01:23:35.411916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.975 [2024-07-25 01:23:35.411924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.975 [2024-07-25 01:23:35.411932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.975 [2024-07-25 01:23:35.411941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.975 [2024-07-25 01:23:35.411951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.975 [2024-07-25 01:23:35.411958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.975 [2024-07-25 01:23:35.411967] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.975 [2024-07-25 01:23:35.411974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.975 [2024-07-25 01:23:35.416808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.975 [2024-07-25 01:23:35.416833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.975 [2024-07-25 01:23:35.416845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.975 [2024-07-25 01:23:35.416863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.975 [2024-07-25 01:23:35.416876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.975 [2024-07-25 01:23:35.416885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.975 [2024-07-25 01:23:35.416898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.975 [2024-07-25 01:23:35.416907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.975 [2024-07-25 01:23:35.416919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.975 [2024-07-25 01:23:35.416929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.975 [2024-07-25 01:23:35.416941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.975 [2024-07-25 01:23:35.416951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.975 [2024-07-25 01:23:35.416963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.975 [2024-07-25 01:23:35.416972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.975 [2024-07-25 01:23:35.416984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.975 [2024-07-25 01:23:35.416993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.975 [2024-07-25 01:23:35.417005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.975 [2024-07-25 01:23:35.417015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.975 [2024-07-25 01:23:35.417027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.975 [2024-07-25 01:23:35.417039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.975 [2024-07-25 01:23:35.417057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:12.975 [2024-07-25 01:23:35.417067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.975 [2024-07-25 01:23:35.417079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.975 [2024-07-25 01:23:35.417089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.975 [2024-07-25 01:23:35.417101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.975 [2024-07-25 01:23:35.417110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.975 [2024-07-25 01:23:35.417122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.975 [2024-07-25 01:23:35.417134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.975 [2024-07-25 01:23:35.417148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.975 [2024-07-25 01:23:35.417158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.975 [2024-07-25 01:23:35.417170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.975 [2024-07-25 01:23:35.417180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.975 [2024-07-25 01:23:35.417192] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.975 [2024-07-25 01:23:35.417202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.975 [2024-07-25 01:23:35.417214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.975 [2024-07-25 01:23:35.417224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.975 [2024-07-25 01:23:35.417293] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc7f9a0 was disconnected and freed. reset controller. 00:23:12.975 [2024-07-25 01:23:35.417391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.975 [2024-07-25 01:23:35.417402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.975 [2024-07-25 01:23:35.417416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.975 [2024-07-25 01:23:35.417426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.975 [2024-07-25 01:23:35.417438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.975 [2024-07-25 01:23:35.417448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.975 [2024-07-25 01:23:35.417459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.975 [2024-07-25 01:23:35.417470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.975 [2024-07-25 01:23:35.417482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.975 [2024-07-25 01:23:35.417491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.975 [2024-07-25 01:23:35.417503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.975 [2024-07-25 01:23:35.417513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.975 [2024-07-25 01:23:35.417524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.975 [2024-07-25 01:23:35.417534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.975 [2024-07-25 01:23:35.417545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.975 [2024-07-25 01:23:35.417555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.975 [2024-07-25 01:23:35.417571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.975 [2024-07-25 01:23:35.417580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:12.975 [2024-07-25 01:23:35.417592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.976 [2024-07-25 01:23:35.417602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.976 [2024-07-25 01:23:35.417614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.976 [2024-07-25 01:23:35.417624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.976 [2024-07-25 01:23:35.417637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.976 [2024-07-25 01:23:35.417646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.976 [2024-07-25 01:23:35.417659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.976 [2024-07-25 01:23:35.417668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.976 [2024-07-25 01:23:35.417680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.976 [2024-07-25 01:23:35.417689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.976 [2024-07-25 01:23:35.417701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.976 
[2024-07-25 01:23:35.417710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.976 [2024-07-25 01:23:35.417722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.976 [2024-07-25 01:23:35.417732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.976 [2024-07-25 01:23:35.417744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.976 [2024-07-25 01:23:35.417754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.976 [2024-07-25 01:23:35.417765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.976 [2024-07-25 01:23:35.417775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.976 [2024-07-25 01:23:35.417786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.976 [2024-07-25 01:23:35.417796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.976 [2024-07-25 01:23:35.417807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.976 [2024-07-25 01:23:35.417817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.976 [2024-07-25 01:23:35.417829] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.976 [2024-07-25 01:23:35.417840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.976 [2024-07-25 01:23:35.417852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.976 [2024-07-25 01:23:35.417861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.976 [2024-07-25 01:23:35.417873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.976 [2024-07-25 01:23:35.417882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.976 [2024-07-25 01:23:35.417894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.976 [2024-07-25 01:23:35.417904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.976 [2024-07-25 01:23:35.417916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.976 [2024-07-25 01:23:35.417926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.976 [2024-07-25 01:23:35.417938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.976 [2024-07-25 01:23:35.417948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.976 [2024-07-25 01:23:35.417960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.976 [2024-07-25 01:23:35.417970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.976 [2024-07-25 01:23:35.417982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.976 [2024-07-25 01:23:35.417992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.976 [2024-07-25 01:23:35.418005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.976 [2024-07-25 01:23:35.418016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.976 [2024-07-25 01:23:35.418027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.976 [2024-07-25 01:23:35.418037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.976 [2024-07-25 01:23:35.418054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.976 [2024-07-25 01:23:35.418064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.976 [2024-07-25 01:23:35.418075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.976 [2024-07-25 01:23:35.418085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.976 [2024-07-25 01:23:35.418098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.976 [2024-07-25 01:23:35.418108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.976 [2024-07-25 01:23:35.418122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.976 [2024-07-25 01:23:35.418132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.976 [2024-07-25 01:23:35.418144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.976 [2024-07-25 01:23:35.418154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.976 [2024-07-25 01:23:35.418166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.976 [2024-07-25 01:23:35.418176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.976 [2024-07-25 01:23:35.418190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.976 [2024-07-25 01:23:35.418199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.976 
[2024-07-25 01:23:35.418212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.976 [2024-07-25 01:23:35.418223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.976 [2024-07-25 01:23:35.418235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.976 [2024-07-25 01:23:35.418245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.976 [2024-07-25 01:23:35.418258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.976 [2024-07-25 01:23:35.418267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.976 [2024-07-25 01:23:35.418279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.976 [2024-07-25 01:23:35.418289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.976 [2024-07-25 01:23:35.418301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.976 [2024-07-25 01:23:35.418311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.977 [2024-07-25 01:23:35.418323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.977 [2024-07-25 01:23:35.418333] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.977 [2024-07-25 01:23:35.418346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.977 [2024-07-25 01:23:35.418356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.977 [2024-07-25 01:23:35.418368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.977 [2024-07-25 01:23:35.418378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.977 [2024-07-25 01:23:35.418390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.977 [2024-07-25 01:23:35.418402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.977 [2024-07-25 01:23:35.418414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.977 [2024-07-25 01:23:35.418424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.977 [2024-07-25 01:23:35.418436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.977 [2024-07-25 01:23:35.418446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.977 [2024-07-25 01:23:35.418459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.977 [2024-07-25 01:23:35.418469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.977 [2024-07-25 01:23:35.418481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.977 [2024-07-25 01:23:35.418491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.977 [2024-07-25 01:23:35.418503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.977 [2024-07-25 01:23:35.418513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.977 [2024-07-25 01:23:35.418525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.977 [2024-07-25 01:23:35.418534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.977 [2024-07-25 01:23:35.418546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.977 [2024-07-25 01:23:35.418556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.977 [2024-07-25 01:23:35.418568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.977 [2024-07-25 01:23:35.418578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:12.977 [2024-07-25 01:23:35.418590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.977 [2024-07-25 01:23:35.418600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.977 [2024-07-25 01:23:35.418612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.977 [2024-07-25 01:23:35.418621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.977 [2024-07-25 01:23:35.418633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.977 [2024-07-25 01:23:35.418643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.977 [2024-07-25 01:23:35.418654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.977 [2024-07-25 01:23:35.418664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.977 [2024-07-25 01:23:35.418678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.977 [2024-07-25 01:23:35.418690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.977 [2024-07-25 01:23:35.418702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.977 [2024-07-25 01:23:35.418713] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.977 [2024-07-25 01:23:35.418725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.977 [2024-07-25 01:23:35.418734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.977 [2024-07-25 01:23:35.418746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.977 [2024-07-25 01:23:35.418756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.977 [2024-07-25 01:23:35.418768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.977 [2024-07-25 01:23:35.418778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.977 [2024-07-25 01:23:35.418790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.977 [2024-07-25 01:23:35.418800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.977 [2024-07-25 01:23:35.418865] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc80e40 was disconnected and freed. reset controller. 
00:23:12.977 [2024-07-25 01:23:35.418959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.977 [2024-07-25 01:23:35.418971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.977 [2024-07-25 01:23:35.418985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.977 [2024-07-25 01:23:35.418995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.977 [2024-07-25 01:23:35.419007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.977 [2024-07-25 01:23:35.419017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.977 [2024-07-25 01:23:35.419030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.977 [2024-07-25 01:23:35.419039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.977 [2024-07-25 01:23:35.419056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.977 [2024-07-25 01:23:35.419067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.977 [2024-07-25 01:23:35.419079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.977 [2024-07-25 01:23:35.419089] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.977 [2024-07-25 01:23:35.419104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.977 [2024-07-25 01:23:35.419114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.977 [2024-07-25 01:23:35.419126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.977 [2024-07-25 01:23:35.419136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.977 [2024-07-25 01:23:35.419148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.977 [2024-07-25 01:23:35.419158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.977 [2024-07-25 01:23:35.419170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.977 [2024-07-25 01:23:35.419180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.977 [2024-07-25 01:23:35.419193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.977 [2024-07-25 01:23:35.419203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.977 [2024-07-25 01:23:35.419215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.977 [2024-07-25 01:23:35.419225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.977 [2024-07-25 01:23:35.419237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.977 [2024-07-25 01:23:35.419247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.977 [2024-07-25 01:23:35.419258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.977 [2024-07-25 01:23:35.419268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.977 [2024-07-25 01:23:35.419280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.977 [2024-07-25 01:23:35.419290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.977 [2024-07-25 01:23:35.419301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.977 [2024-07-25 01:23:35.419310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.977 [2024-07-25 01:23:35.419322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.977 [2024-07-25 01:23:35.419332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.978 [2024-07-25 01:23:35.419344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.978 [2024-07-25 01:23:35.419354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.978 [2024-07-25 01:23:35.419366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.978 [2024-07-25 01:23:35.419378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.978 [2024-07-25 01:23:35.419390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.978 [2024-07-25 01:23:35.419400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.978 [2024-07-25 01:23:35.419412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.978 [2024-07-25 01:23:35.419421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.978 [2024-07-25 01:23:35.419433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.978 [2024-07-25 01:23:35.419443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.978 [2024-07-25 01:23:35.419456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.978 
[2024-07-25 01:23:35.419466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.978 [2024-07-25 01:23:35.419478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.978 [2024-07-25 01:23:35.419487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.978 [2024-07-25 01:23:35.419499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.978 [2024-07-25 01:23:35.419509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.978 [2024-07-25 01:23:35.419521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.978 [2024-07-25 01:23:35.419531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.978 [2024-07-25 01:23:35.419543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.978 [2024-07-25 01:23:35.419553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.978 [2024-07-25 01:23:35.419565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.978 [2024-07-25 01:23:35.419584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.978 [2024-07-25 01:23:35.419596] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.978 [2024-07-25 01:23:35.419605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.978 [2024-07-25 01:23:35.419616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.978 [2024-07-25 01:23:35.419626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.978 [2024-07-25 01:23:35.419638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.978 [2024-07-25 01:23:35.419648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.978 [2024-07-25 01:23:35.419662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.978 [2024-07-25 01:23:35.419672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.978 [2024-07-25 01:23:35.419683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.978 [2024-07-25 01:23:35.419692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.978 [2024-07-25 01:23:35.419704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.978 [2024-07-25 01:23:35.419714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.978 [2024-07-25 01:23:35.419726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.978 [2024-07-25 01:23:35.419736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.978 [2024-07-25 01:23:35.419747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.978 [2024-07-25 01:23:35.419757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.978 [2024-07-25 01:23:35.419769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.978 [2024-07-25 01:23:35.419780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.978 [2024-07-25 01:23:35.419793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.978 [2024-07-25 01:23:35.419803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.978 [2024-07-25 01:23:35.419815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.978 [2024-07-25 01:23:35.419825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.978 [2024-07-25 01:23:35.419837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:12.978 [2024-07-25 01:23:35.419847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.978 [2024-07-25 01:23:35.419859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.978 [2024-07-25 01:23:35.419868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.978 [2024-07-25 01:23:35.419881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.978 [2024-07-25 01:23:35.419891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.978 [2024-07-25 01:23:35.419903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.978 [2024-07-25 01:23:35.419913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.978 [2024-07-25 01:23:35.419926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.978 [2024-07-25 01:23:35.419938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.978 [2024-07-25 01:23:35.419951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.978 [2024-07-25 01:23:35.419961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.978 [2024-07-25 01:23:35.419974] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.978 [2024-07-25 01:23:35.419983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.978 [2024-07-25 01:23:35.419995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.978 [2024-07-25 01:23:35.420005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.978 [2024-07-25 01:23:35.420017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.978 [2024-07-25 01:23:35.420026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.978 [2024-07-25 01:23:35.420038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.978 [2024-07-25 01:23:35.420053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.978 [2024-07-25 01:23:35.420066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.978 [2024-07-25 01:23:35.420076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.978 [2024-07-25 01:23:35.420088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.978 [2024-07-25 01:23:35.420098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.978 [2024-07-25 01:23:35.420110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.978 [2024-07-25 01:23:35.420120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.978 [2024-07-25 01:23:35.420132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.978 [2024-07-25 01:23:35.420142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.978 [2024-07-25 01:23:35.420154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.978 [2024-07-25 01:23:35.420164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.978 [2024-07-25 01:23:35.420176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.978 [2024-07-25 01:23:35.420186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.978 [2024-07-25 01:23:35.420198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.978 [2024-07-25 01:23:35.420208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.978 [2024-07-25 01:23:35.420221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:12.979 [2024-07-25 01:23:35.420231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.979 [2024-07-25 01:23:35.420243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.979 [2024-07-25 01:23:35.420253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.979 [2024-07-25 01:23:35.420266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.979 [2024-07-25 01:23:35.420276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.979 [2024-07-25 01:23:35.420288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.979 [2024-07-25 01:23:35.420298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.979 [2024-07-25 01:23:35.420310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.979 [2024-07-25 01:23:35.420320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.979 [2024-07-25 01:23:35.420332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.979 [2024-07-25 01:23:35.420343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.979 [2024-07-25 01:23:35.420355] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.979 [2024-07-25 01:23:35.420365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.979 [2024-07-25 01:23:35.420376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.979 [2024-07-25 01:23:35.420386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.979 [2024-07-25 01:23:35.420451] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc82370 was disconnected and freed. reset controller. 00:23:12.979 [2024-07-25 01:23:35.424506] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:12.979 [2024-07-25 01:23:35.424549] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:12.979 [2024-07-25 01:23:35.424564] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:12.979 [2024-07-25 01:23:35.424577] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:12.979 [2024-07-25 01:23:35.424591] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:12.979 [2024-07-25 01:23:35.424605] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:12.979 [2024-07-25 01:23:35.424620] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:23:12.979 [2024-07-25 01:23:35.429420] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:23:12.979 [2024-07-25 01:23:35.429813] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:12.979 [2024-07-25 01:23:35.430112] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:12.979 [2024-07-25 01:23:35.430409] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:12.979 [2024-07-25 01:23:35.430451] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:12.979 [2024-07-25 01:23:35.431002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.979 [2024-07-25 01:23:35.431019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x693340 with addr=10.0.0.2, port=4420 00:23:12.979 [2024-07-25 01:23:35.431029] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x693340 is same with the state(5) to be set 00:23:12.979 [2024-07-25 01:23:35.432083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.979 [2024-07-25 01:23:35.432099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.979 [2024-07-25 01:23:35.432112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.979 [2024-07-25 01:23:35.432120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.979 [2024-07-25 01:23:35.432131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.979 [2024-07-25 
01:23:35.432139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.979 [2024-07-25 01:23:35.432150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.979 [2024-07-25 01:23:35.432158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.979 [2024-07-25 01:23:35.432167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.979 [2024-07-25 01:23:35.432175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.979 [2024-07-25 01:23:35.432184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.979 [2024-07-25 01:23:35.432192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.979 [2024-07-25 01:23:35.432202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.979 [2024-07-25 01:23:35.432211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.979 [2024-07-25 01:23:35.432220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.979 [2024-07-25 01:23:35.432228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.979 [2024-07-25 01:23:35.432237] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.979 [2024-07-25 01:23:35.432244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.979 [2024-07-25 01:23:35.432254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.979 [2024-07-25 01:23:35.432262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.979 [2024-07-25 01:23:35.432272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.979 [2024-07-25 01:23:35.432281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.979 [2024-07-25 01:23:35.432294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.979 [2024-07-25 01:23:35.432302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.979 [2024-07-25 01:23:35.432313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.979 [2024-07-25 01:23:35.432320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.979 [2024-07-25 01:23:35.432330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.979 [2024-07-25 01:23:35.432337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.979 [2024-07-25 01:23:35.432346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.979 [2024-07-25 01:23:35.432354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.979 [2024-07-25 01:23:35.432364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.979 [2024-07-25 01:23:35.432371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.979 [2024-07-25 01:23:35.432381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.979 [2024-07-25 01:23:35.432389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.979 [2024-07-25 01:23:35.432398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.979 [2024-07-25 01:23:35.432405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.979 [2024-07-25 01:23:35.432416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.979 [2024-07-25 01:23:35.432424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.979 [2024-07-25 01:23:35.432434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.979 [2024-07-25 
01:23:35.432442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.979 [2024-07-25 01:23:35.432452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.979 [2024-07-25 01:23:35.432459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.979 [2024-07-25 01:23:35.432469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.979 [2024-07-25 01:23:35.432477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.979 [2024-07-25 01:23:35.432487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.979 [2024-07-25 01:23:35.432494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.979 [2024-07-25 01:23:35.432504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.979 [2024-07-25 01:23:35.432514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.979 [2024-07-25 01:23:35.432524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.979 [2024-07-25 01:23:35.432532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.979 [2024-07-25 01:23:35.432541] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.980 [2024-07-25 01:23:35.432549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.980 [2024-07-25 01:23:35.432558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.980 [2024-07-25 01:23:35.432566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.980 [2024-07-25 01:23:35.432575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.980 [2024-07-25 01:23:35.432583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.980 [2024-07-25 01:23:35.432592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.980 [2024-07-25 01:23:35.432600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.980 [2024-07-25 01:23:35.432609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.980 [2024-07-25 01:23:35.432616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.980 [2024-07-25 01:23:35.432626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.980 [2024-07-25 01:23:35.432634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.980 [2024-07-25 01:23:35.432643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.980 [2024-07-25 01:23:35.432651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.980 [2024-07-25 01:23:35.432660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.980 [2024-07-25 01:23:35.432667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.980 [2024-07-25 01:23:35.432677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.980 [2024-07-25 01:23:35.432685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.980 [2024-07-25 01:23:35.432694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.980 [2024-07-25 01:23:35.432702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.980 [2024-07-25 01:23:35.432711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.980 [2024-07-25 01:23:35.432719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.980 [2024-07-25 01:23:35.432733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.980 
[2024-07-25 01:23:35.432742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.980 [2024-07-25 01:23:35.432751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.980 [2024-07-25 01:23:35.432759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.980 [2024-07-25 01:23:35.432769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.980 [2024-07-25 01:23:35.432776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.980 [2024-07-25 01:23:35.432786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.980 [2024-07-25 01:23:35.432793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.980 [2024-07-25 01:23:35.432803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.980 [2024-07-25 01:23:35.432811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.980 [2024-07-25 01:23:35.432820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.980 [2024-07-25 01:23:35.432828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.980 [2024-07-25 01:23:35.432837] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.980 [2024-07-25 01:23:35.432845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.980 [2024-07-25 01:23:35.432854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.980 [2024-07-25 01:23:35.432862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.980 [2024-07-25 01:23:35.432871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.980 [2024-07-25 01:23:35.432878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.980 [2024-07-25 01:23:35.432887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.980 [2024-07-25 01:23:35.432895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.980 [2024-07-25 01:23:35.432904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.980 [2024-07-25 01:23:35.432912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.980 [2024-07-25 01:23:35.432921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.980 [2024-07-25 01:23:35.432929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.980 [2024-07-25 01:23:35.432938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.980 [2024-07-25 01:23:35.432947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.980 [2024-07-25 01:23:35.432957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.980 [2024-07-25 01:23:35.432964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.980 [2024-07-25 01:23:35.432974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.980 [2024-07-25 01:23:35.432981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.980 [2024-07-25 01:23:35.432990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.980 [2024-07-25 01:23:35.432998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.980 [2024-07-25 01:23:35.433008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.980 [2024-07-25 01:23:35.433016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.980 [2024-07-25 01:23:35.433026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:12.980 [2024-07-25 01:23:35.433033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.980 [2024-07-25 01:23:35.433046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.980 [2024-07-25 01:23:35.433054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.980 [2024-07-25 01:23:35.433063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.980 [2024-07-25 01:23:35.433071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.980 [2024-07-25 01:23:35.433081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.980 [2024-07-25 01:23:35.433088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.980 [2024-07-25 01:23:35.433098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.980 [2024-07-25 01:23:35.433105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.980 [2024-07-25 01:23:35.433114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.980 [2024-07-25 01:23:35.433122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.981 [2024-07-25 01:23:35.433131] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.981 [2024-07-25 01:23:35.433139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.981 [2024-07-25 01:23:35.433151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.981 [2024-07-25 01:23:35.433158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.981 [2024-07-25 01:23:35.433169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.981 [2024-07-25 01:23:35.433176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.981 [2024-07-25 01:23:35.433186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.981 [2024-07-25 01:23:35.433194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.981 [2024-07-25 01:23:35.433203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.981 [2024-07-25 01:23:35.433211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.981 [2024-07-25 01:23:35.433219] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3ff60 is same with the state(5) to be set 00:23:12.981 [2024-07-25 01:23:35.434375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.981 [2024-07-25 01:23:35.434390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.981 [2024-07-25 01:23:35.434404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.981 [2024-07-25 01:23:35.434412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.981 [2024-07-25 01:23:35.434421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.981 [2024-07-25 01:23:35.434430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.981 [2024-07-25 01:23:35.434440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.981 [2024-07-25 01:23:35.434450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.981 [2024-07-25 01:23:35.434459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.981 [2024-07-25 01:23:35.434466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.981 [2024-07-25 01:23:35.434476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.981 [2024-07-25 01:23:35.434485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:23:12.981 [2024-07-25 01:23:35.434495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.981 [2024-07-25 01:23:35.434503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.981 [2024-07-25 01:23:35.434512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.981 [2024-07-25 01:23:35.434520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.981 [2024-07-25 01:23:35.434530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.981 [2024-07-25 01:23:35.434538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.981 [2024-07-25 01:23:35.434550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.981 [2024-07-25 01:23:35.434558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.981 [2024-07-25 01:23:35.434569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.981 [2024-07-25 01:23:35.434576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.981 [2024-07-25 01:23:35.434587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.981 [2024-07-25 01:23:35.434595] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.981 [2024-07-25 01:23:35.434604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.981 [2024-07-25 01:23:35.434612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.981 [2024-07-25 01:23:35.434622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.981 [2024-07-25 01:23:35.434630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.981 [2024-07-25 01:23:35.434640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.981 [2024-07-25 01:23:35.434648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.981 [2024-07-25 01:23:35.434657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.981 [2024-07-25 01:23:35.434665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.981 [2024-07-25 01:23:35.434674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.981 [2024-07-25 01:23:35.434681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.981 [2024-07-25 01:23:35.434691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.981 [2024-07-25 01:23:35.434699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.981 [2024-07-25 01:23:35.434708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.981 [2024-07-25 01:23:35.434716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.981 [2024-07-25 01:23:35.434726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.981 [2024-07-25 01:23:35.434733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.981 [2024-07-25 01:23:35.434743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.981 [2024-07-25 01:23:35.434751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.981 [2024-07-25 01:23:35.434761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.981 [2024-07-25 01:23:35.434770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.981 [2024-07-25 01:23:35.434779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.981 [2024-07-25 01:23:35.434786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:12.981 [2024-07-25 01:23:35.434795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.981 [2024-07-25 01:23:35.434804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.981 [2024-07-25 01:23:35.434813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.981 [2024-07-25 01:23:35.434820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.981 [2024-07-25 01:23:35.434829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.981 [2024-07-25 01:23:35.434836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.981 [2024-07-25 01:23:35.434846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.981 [2024-07-25 01:23:35.434854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.981 [2024-07-25 01:23:35.434864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.981 [2024-07-25 01:23:35.434872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.981 [2024-07-25 01:23:35.434881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.981 [2024-07-25 
01:23:35.434888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.981 [2024-07-25 01:23:35.434897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.981 [2024-07-25 01:23:35.434904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.981 [2024-07-25 01:23:35.434913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.981 [2024-07-25 01:23:35.434921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.981 [2024-07-25 01:23:35.434931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.981 [2024-07-25 01:23:35.434938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.981 [2024-07-25 01:23:35.434947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.981 [2024-07-25 01:23:35.434954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.982 [2024-07-25 01:23:35.434964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.982 [2024-07-25 01:23:35.434973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.982 [2024-07-25 01:23:35.434983] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.982 [2024-07-25 01:23:35.434991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.982 [2024-07-25 01:23:35.435000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.982 [2024-07-25 01:23:35.435008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.982 [2024-07-25 01:23:35.435017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.982 [2024-07-25 01:23:35.435025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.982 [2024-07-25 01:23:35.435035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.982 [2024-07-25 01:23:35.435046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.982 [2024-07-25 01:23:35.435056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.982 [2024-07-25 01:23:35.435063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.982 [2024-07-25 01:23:35.435073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.982 [2024-07-25 01:23:35.435080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.982 [2024-07-25 01:23:35.435090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.982 [2024-07-25 01:23:35.435097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.982 [2024-07-25 01:23:35.435107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.982 [2024-07-25 01:23:35.435114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.982 [2024-07-25 01:23:35.435123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.982 [2024-07-25 01:23:35.435131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.982 [2024-07-25 01:23:35.435141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.982 [2024-07-25 01:23:35.435149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.982 [2024-07-25 01:23:35.435159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.982 [2024-07-25 01:23:35.435166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.982 [2024-07-25 01:23:35.435176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.982 
[2024-07-25 01:23:35.435183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.982 [2024-07-25 01:23:35.435193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.982 [2024-07-25 01:23:35.435202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.982 [2024-07-25 01:23:35.435212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.982 [2024-07-25 01:23:35.435219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.982 [2024-07-25 01:23:35.435231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.982 [2024-07-25 01:23:35.435238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.982 [2024-07-25 01:23:35.435248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.982 [2024-07-25 01:23:35.435255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.982 [2024-07-25 01:23:35.435265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.982 [2024-07-25 01:23:35.435275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.982 [2024-07-25 01:23:35.435285] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.982 [2024-07-25 01:23:35.435292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.982 [2024-07-25 01:23:35.435302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.982 [2024-07-25 01:23:35.435310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.982 [2024-07-25 01:23:35.435319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.982 [2024-07-25 01:23:35.435326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.982 [2024-07-25 01:23:35.435335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.982 [2024-07-25 01:23:35.435343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.982 [2024-07-25 01:23:35.435352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.982 [2024-07-25 01:23:35.435361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.982 [2024-07-25 01:23:35.435370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.982 [2024-07-25 01:23:35.435378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.982 [2024-07-25 01:23:35.435387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.982 [2024-07-25 01:23:35.435394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.982 [2024-07-25 01:23:35.435404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.982 [2024-07-25 01:23:35.435411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.982 [2024-07-25 01:23:35.435422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.982 [2024-07-25 01:23:35.435430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.982 [2024-07-25 01:23:35.435439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.982 [2024-07-25 01:23:35.435446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.982 [2024-07-25 01:23:35.435455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.982 [2024-07-25 01:23:35.435462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.982 [2024-07-25 01:23:35.435472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:12.982 [2024-07-25 01:23:35.435480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.982 [2024-07-25 01:23:35.435489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.982 [2024-07-25 01:23:35.435496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.982 [2024-07-25 01:23:35.435505] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc70bc0 is same with the state(5) to be set 00:23:12.982 [2024-07-25 01:23:35.436910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.982 [2024-07-25 01:23:35.436928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.982 [2024-07-25 01:23:35.436943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.982 [2024-07-25 01:23:35.436950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.982 [2024-07-25 01:23:35.436961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.982 [2024-07-25 01:23:35.436967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.982 [2024-07-25 01:23:35.436976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.982 [2024-07-25 01:23:35.436983] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.982 [2024-07-25 01:23:35.436991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.982 [2024-07-25 01:23:35.436998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.982 [2024-07-25 01:23:35.437005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.982 [2024-07-25 01:23:35.437012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.982 [2024-07-25 01:23:35.437020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.982 [2024-07-25 01:23:35.437026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.982 [2024-07-25 01:23:35.437035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.982 [2024-07-25 01:23:35.437051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.983 [2024-07-25 01:23:35.437060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.983 [2024-07-25 01:23:35.437066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.983 [2024-07-25 01:23:35.437074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 
lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.983 [2024-07-25 01:23:35.437080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.983 [2024-07-25 01:23:35.437089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.983 [2024-07-25 01:23:35.437096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.983 [2024-07-25 01:23:35.437105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.983 [2024-07-25 01:23:35.437111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.983 [2024-07-25 01:23:35.437119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.983 [2024-07-25 01:23:35.437126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.983 [2024-07-25 01:23:35.437133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.983 [2024-07-25 01:23:35.437140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.983 [2024-07-25 01:23:35.437148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.983 [2024-07-25 01:23:35.437155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:12.983 [2024-07-25 01:23:35.437163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.983 [2024-07-25 01:23:35.437169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.983 [2024-07-25 01:23:35.437177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.983 [2024-07-25 01:23:35.437184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.983 [2024-07-25 01:23:35.437192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.983 [2024-07-25 01:23:35.437199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.983 [2024-07-25 01:23:35.437210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.983 [2024-07-25 01:23:35.437217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.983 [2024-07-25 01:23:35.437225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.983 [2024-07-25 01:23:35.437231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.983 [2024-07-25 01:23:35.437240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.983 [2024-07-25 01:23:35.437247] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.983 [2024-07-25 01:23:35.437256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.983 [2024-07-25 01:23:35.437262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.983 [2024-07-25 01:23:35.437270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.983 [2024-07-25 01:23:35.437277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.983 [2024-07-25 01:23:35.437285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.983 [2024-07-25 01:23:35.437291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.983 [2024-07-25 01:23:35.437300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.983 [2024-07-25 01:23:35.437306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.983 [2024-07-25 01:23:35.437315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.983 [2024-07-25 01:23:35.437321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.983 [2024-07-25 01:23:35.437330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.983 [2024-07-25 01:23:35.437336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.983 [2024-07-25 01:23:35.437344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.983 [2024-07-25 01:23:35.437350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.983 [2024-07-25 01:23:35.437358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.983 [2024-07-25 01:23:35.437365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.983 [2024-07-25 01:23:35.437373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.983 [2024-07-25 01:23:35.437380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.983 [2024-07-25 01:23:35.437388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.983 [2024-07-25 01:23:35.437394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.983 [2024-07-25 01:23:35.437402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.983 [2024-07-25 01:23:35.437409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:12.983 [2024-07-25 01:23:35.437418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.983 [2024-07-25 01:23:35.437426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.983 [2024-07-25 01:23:35.437434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.983 [2024-07-25 01:23:35.437441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.983 [2024-07-25 01:23:35.437452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.983 [2024-07-25 01:23:35.437459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.983 [2024-07-25 01:23:35.437468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.983 [2024-07-25 01:23:35.437475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.983 [2024-07-25 01:23:35.437483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.983 [2024-07-25 01:23:35.437490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.983 [2024-07-25 01:23:35.437498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.983 [2024-07-25 
01:23:35.437504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.983 [2024-07-25 01:23:35.437512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.983 [2024-07-25 01:23:35.437519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.983 [2024-07-25 01:23:35.437528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.983 [2024-07-25 01:23:35.437534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.983 [2024-07-25 01:23:35.437542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.983 [2024-07-25 01:23:35.437549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.983 [2024-07-25 01:23:35.437557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.983 [2024-07-25 01:23:35.437563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.983 [2024-07-25 01:23:35.437571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.983 [2024-07-25 01:23:35.437578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.983 [2024-07-25 01:23:35.437587] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.983 [2024-07-25 01:23:35.437593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.983 [2024-07-25 01:23:35.437601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.983 [2024-07-25 01:23:35.437608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.983 [2024-07-25 01:23:35.437617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.983 [2024-07-25 01:23:35.437625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.983 [2024-07-25 01:23:35.437633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.983 [2024-07-25 01:23:35.437639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.984 [2024-07-25 01:23:35.437648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.984 [2024-07-25 01:23:35.437654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.984 [2024-07-25 01:23:35.437662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.984 [2024-07-25 01:23:35.437668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.984 [2024-07-25 01:23:35.437676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.984 [2024-07-25 01:23:35.437682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.984 [2024-07-25 01:23:35.437693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.984 [2024-07-25 01:23:35.437700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.984 [2024-07-25 01:23:35.437708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.984 [2024-07-25 01:23:35.437715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.984 [2024-07-25 01:23:35.437723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.984 [2024-07-25 01:23:35.437732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.984 [2024-07-25 01:23:35.437742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.984 [2024-07-25 01:23:35.437749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.984 [2024-07-25 01:23:35.437757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.984 
[2024-07-25 01:23:35.437763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.984 [2024-07-25 01:23:35.437772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.984 [2024-07-25 01:23:35.437778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.984 [2024-07-25 01:23:35.437786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.984 [2024-07-25 01:23:35.437793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.984 [2024-07-25 01:23:35.437801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.984 [2024-07-25 01:23:35.437810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.984 [2024-07-25 01:23:35.437818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.984 [2024-07-25 01:23:35.437824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.984 [2024-07-25 01:23:35.437832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.984 [2024-07-25 01:23:35.437839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.984 [2024-07-25 01:23:35.437848] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.984 [2024-07-25 01:23:35.437855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.984 [2024-07-25 01:23:35.437864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.984 [2024-07-25 01:23:35.437871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.984 [2024-07-25 01:23:35.437879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.984 [2024-07-25 01:23:35.437886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.984 [2024-07-25 01:23:35.437895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.984 [2024-07-25 01:23:35.437902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.984 [2024-07-25 01:23:35.437909] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc733d0 is same with the state(5) to be set 00:23:13.246 [2024-07-25 01:23:35.453586] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:23:13.246 [2024-07-25 01:23:35.453620] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:23:13.246 [2024-07-25 01:23:35.453633] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:23:13.246 [2024-07-25 01:23:35.453644] 
nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:23:13.246 [2024-07-25 01:23:35.453656] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:23:13.246 [2024-07-25 01:23:35.454205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:13.246 [2024-07-25 01:23:35.454229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb44c70 with addr=10.0.0.2, port=4420 00:23:13.246 [2024-07-25 01:23:35.454241] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb44c70 is same with the state(5) to be set 00:23:13.246 [2024-07-25 01:23:35.454260] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x693340 (9): Bad file descriptor 00:23:13.246 [2024-07-25 01:23:35.454297] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:13.246 [2024-07-25 01:23:35.454319] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:13.246 [2024-07-25 01:23:35.454333] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:13.246 [2024-07-25 01:23:35.454346] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:23:13.246 [2024-07-25 01:23:35.454363] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb44c70 (9): Bad file descriptor 00:23:13.246 [2024-07-25 01:23:35.454483] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:23:13.246 [2024-07-25 01:23:35.454501] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:23:13.246 task offset: 24576 on job bdev=Nvme5n1 fails 00:23:13.246 00:23:13.246 Latency(us) 00:23:13.246 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:13.246 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:13.246 Job: Nvme1n1 ended in about 0.86 seconds with error 00:23:13.246 Verification LBA range: start 0x0 length 0x400 00:23:13.246 Nvme1n1 : 0.86 149.13 9.32 74.56 0.00 283128.88 22795.13 282659.62 00:23:13.246 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:13.246 Job: Nvme2n1 ended in about 0.86 seconds with error 00:23:13.246 Verification LBA range: start 0x0 length 0x400 00:23:13.246 Nvme2n1 : 0.86 223.32 13.96 74.44 0.00 208695.87 21199.47 224304.08 00:23:13.246 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:13.246 Job: Nvme3n1 ended in about 0.86 seconds with error 00:23:13.246 Verification LBA range: start 0x0 length 0x400 00:23:13.246 Nvme3n1 : 0.86 148.70 9.29 74.35 0.00 273431.15 29177.77 271717.95 00:23:13.246 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:13.246 Job: Nvme4n1 ended in about 0.86 seconds with error 00:23:13.246 Verification LBA range: start 0x0 length 0x400 00:23:13.246 Nvme4n1 : 0.86 148.53 9.28 74.26 0.00 268461.93 24390.79 269894.34 00:23:13.246 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:13.246 Job: Nvme5n1 ended in about 0.85 seconds with error 00:23:13.246 Verification LBA range: start 0x0 
length 0x400 00:23:13.246 Nvme5n1 : 0.85 224.79 14.05 74.93 0.00 195359.83 20515.62 226127.69 00:23:13.246 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:13.246 Job: Nvme6n1 ended in about 0.87 seconds with error 00:23:13.246 Verification LBA range: start 0x0 length 0x400 00:23:13.246 Nvme6n1 : 0.87 147.66 9.23 73.83 0.00 259551.35 20629.59 255305.46 00:23:13.246 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:13.246 Job: Nvme7n1 ended in about 0.86 seconds with error 00:23:13.246 Verification LBA range: start 0x0 length 0x400 00:23:13.246 Nvme7n1 : 0.86 224.43 14.03 74.81 0.00 187792.25 21997.30 206979.78 00:23:13.246 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:13.246 Job: Nvme8n1 ended in about 0.87 seconds with error 00:23:13.246 Verification LBA range: start 0x0 length 0x400 00:23:13.246 Nvme8n1 : 0.87 226.67 14.17 73.64 0.00 183638.58 20059.71 199685.34 00:23:13.246 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:13.246 Job: Nvme9n1 ended in about 0.86 seconds with error 00:23:13.246 Verification LBA range: start 0x0 length 0x400 00:23:13.246 Nvme9n1 : 0.86 224.10 14.01 74.70 0.00 180203.97 22111.28 232510.33 00:23:13.246 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:13.246 Job: Nvme10n1 ended in about 0.88 seconds with error 00:23:13.246 Verification LBA range: start 0x0 length 0x400 00:23:13.246 Nvme10n1 : 0.88 145.21 9.08 72.61 0.00 243425.87 29405.72 227039.50 00:23:13.246 =================================================================================================================== 00:23:13.246 Total : 1862.54 116.41 742.14 0.00 222962.50 20059.71 282659.62 00:23:13.246 [2024-07-25 01:23:35.479364] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:13.246 [2024-07-25 01:23:35.479396] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] 
resetting controller 00:23:13.246 [2024-07-25 01:23:35.479952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:13.246 [2024-07-25 01:23:35.479976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb676b0 with addr=10.0.0.2, port=4420 00:23:13.246 [2024-07-25 01:23:35.479986] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb676b0 is same with the state(5) to be set 00:23:13.246 [2024-07-25 01:23:35.480495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:13.246 [2024-07-25 01:23:35.480508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce3780 with addr=10.0.0.2, port=4420 00:23:13.246 [2024-07-25 01:23:35.480515] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce3780 is same with the state(5) to be set 00:23:13.246 [2024-07-25 01:23:35.481029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:13.246 [2024-07-25 01:23:35.481040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd108d0 with addr=10.0.0.2, port=4420 00:23:13.246 [2024-07-25 01:23:35.481050] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd108d0 is same with the state(5) to be set 00:23:13.246 [2024-07-25 01:23:35.481510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:13.246 [2024-07-25 01:23:35.481521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd18610 with addr=10.0.0.2, port=4420 00:23:13.246 [2024-07-25 01:23:35.481529] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd18610 is same with the state(5) to be set 00:23:13.246 [2024-07-25 01:23:35.481950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:13.246 [2024-07-25 01:23:35.481961] 
nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce4630 with addr=10.0.0.2, port=4420 00:23:13.246 [2024-07-25 01:23:35.481968] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4630 is same with the state(5) to be set 00:23:13.246 [2024-07-25 01:23:35.481979] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:23:13.246 [2024-07-25 01:23:35.481986] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:23:13.246 [2024-07-25 01:23:35.481995] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:23:13.246 [2024-07-25 01:23:35.482742] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:13.246 [2024-07-25 01:23:35.483275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:13.246 [2024-07-25 01:23:35.483289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb88c50 with addr=10.0.0.2, port=4420 00:23:13.246 [2024-07-25 01:23:35.483297] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb88c50 is same with the state(5) to be set 00:23:13.246 [2024-07-25 01:23:35.483779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:13.247 [2024-07-25 01:23:35.483792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8dc50 with addr=10.0.0.2, port=4420 00:23:13.247 [2024-07-25 01:23:35.483800] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8dc50 is same with the state(5) to be set 00:23:13.247 [2024-07-25 01:23:35.484258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:13.247 [2024-07-25 01:23:35.484270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error 
of tqpair=0xce2a60 with addr=10.0.0.2, port=4420 00:23:13.247 [2024-07-25 01:23:35.484277] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2a60 is same with the state(5) to be set 00:23:13.247 [2024-07-25 01:23:35.484291] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb676b0 (9): Bad file descriptor 00:23:13.247 [2024-07-25 01:23:35.484302] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce3780 (9): Bad file descriptor 00:23:13.247 [2024-07-25 01:23:35.484311] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd108d0 (9): Bad file descriptor 00:23:13.247 [2024-07-25 01:23:35.484323] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd18610 (9): Bad file descriptor 00:23:13.247 [2024-07-25 01:23:35.484332] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce4630 (9): Bad file descriptor 00:23:13.247 [2024-07-25 01:23:35.484340] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:13.247 [2024-07-25 01:23:35.484347] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:13.247 [2024-07-25 01:23:35.484355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:13.247 [2024-07-25 01:23:35.484392] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:13.247 [2024-07-25 01:23:35.484404] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:13.247 [2024-07-25 01:23:35.484414] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:23:13.247 [2024-07-25 01:23:35.484423] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:13.247 [2024-07-25 01:23:35.484434] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:13.247 [2024-07-25 01:23:35.484443] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:13.247 [2024-07-25 01:23:35.484493] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:13.247 [2024-07-25 01:23:35.484505] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb88c50 (9): Bad file descriptor 00:23:13.247 [2024-07-25 01:23:35.484516] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8dc50 (9): Bad file descriptor 00:23:13.247 [2024-07-25 01:23:35.484525] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce2a60 (9): Bad file descriptor 00:23:13.247 [2024-07-25 01:23:35.484533] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:23:13.247 [2024-07-25 01:23:35.484539] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:23:13.247 [2024-07-25 01:23:35.484545] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:23:13.247 [2024-07-25 01:23:35.484554] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:23:13.247 [2024-07-25 01:23:35.484561] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:23:13.247 [2024-07-25 01:23:35.484568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 
00:23:13.247 [2024-07-25 01:23:35.484577] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:13.247 [2024-07-25 01:23:35.484584] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:23:13.247 [2024-07-25 01:23:35.484590] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:23:13.247 [2024-07-25 01:23:35.484599] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:23:13.247 [2024-07-25 01:23:35.484605] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:23:13.247 [2024-07-25 01:23:35.484611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:23:13.247 [2024-07-25 01:23:35.484621] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:23:13.247 [2024-07-25 01:23:35.484627] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:23:13.247 [2024-07-25 01:23:35.484637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:23:13.247 [2024-07-25 01:23:35.484683] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:23:13.247 [2024-07-25 01:23:35.484694] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:13.247 [2024-07-25 01:23:35.484701] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:13.247 [2024-07-25 01:23:35.484707] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:13.247 [2024-07-25 01:23:35.484715] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:13.247 [2024-07-25 01:23:35.484722] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:13.247 [2024-07-25 01:23:35.484734] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:23:13.247 [2024-07-25 01:23:35.484741] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:23:13.247 [2024-07-25 01:23:35.484748] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:23:13.247 [2024-07-25 01:23:35.484756] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:23:13.247 [2024-07-25 01:23:35.484763] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:23:13.247 [2024-07-25 01:23:35.484769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:23:13.247 [2024-07-25 01:23:35.484778] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:23:13.247 [2024-07-25 01:23:35.484783] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:23:13.247 [2024-07-25 01:23:35.484790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:23:13.247 [2024-07-25 01:23:35.484817] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:13.247 [2024-07-25 01:23:35.484824] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:13.247 [2024-07-25 01:23:35.484830] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:13.247 [2024-07-25 01:23:35.485310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:13.247 [2024-07-25 01:23:35.485324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x693340 with addr=10.0.0.2, port=4420 00:23:13.247 [2024-07-25 01:23:35.485331] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x693340 is same with the state(5) to be set 00:23:13.247 [2024-07-25 01:23:35.485356] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x693340 (9): Bad file descriptor 00:23:13.247 [2024-07-25 01:23:35.485380] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:23:13.247 [2024-07-25 01:23:35.485387] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:23:13.247 [2024-07-25 01:23:35.485393] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:23:13.247 [2024-07-25 01:23:35.485417] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:13.508 01:23:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:23:13.508 01:23:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:23:14.445 01:23:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 964037 00:23:14.445 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (964037) - No such process 00:23:14.445 01:23:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:23:14.445 01:23:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:23:14.445 01:23:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:14.445 01:23:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:14.445 01:23:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:14.445 01:23:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:14.445 01:23:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:14.445 01:23:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:23:14.445 01:23:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:14.445 01:23:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:23:14.445 01:23:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:14.445 01:23:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:14.445 rmmod nvme_tcp 00:23:14.445 rmmod nvme_fabrics 00:23:14.445 rmmod nvme_keyring 00:23:14.445 01:23:36 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:14.445 01:23:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:23:14.446 01:23:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:23:14.446 01:23:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:23:14.446 01:23:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:14.446 01:23:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:14.446 01:23:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:14.446 01:23:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:14.446 01:23:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:14.446 01:23:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:14.446 01:23:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:14.446 01:23:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:16.988 01:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:16.988 00:23:16.988 real 0m7.999s 00:23:16.988 user 0m20.232s 00:23:16.988 sys 0m1.333s 00:23:16.988 01:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:16.988 01:23:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:16.988 ************************************ 00:23:16.988 END TEST nvmf_shutdown_tc3 00:23:16.988 ************************************ 00:23:16.988 01:23:38 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # 
return 0 00:23:16.988 01:23:38 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:23:16.988 00:23:16.988 real 0m31.532s 00:23:16.988 user 1m20.155s 00:23:16.988 sys 0m8.408s 00:23:16.988 01:23:38 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:16.988 01:23:38 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:16.988 ************************************ 00:23:16.988 END TEST nvmf_shutdown 00:23:16.988 ************************************ 00:23:16.988 01:23:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:16.988 01:23:39 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:23:16.988 01:23:39 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:16.988 01:23:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:16.988 01:23:39 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:23:16.988 01:23:39 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:16.988 01:23:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:16.988 01:23:39 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:23:16.988 01:23:39 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:16.988 01:23:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:16.988 01:23:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:16.988 01:23:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:16.988 ************************************ 00:23:16.988 START TEST nvmf_multicontroller 00:23:16.988 ************************************ 00:23:16.988 01:23:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:16.988 * Looking for test storage... 
00:23:16.988 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:16.988 01:23:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:16.988 01:23:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:23:16.988 01:23:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:16.988 01:23:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:16.988 01:23:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:16.988 01:23:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:16.989 01:23:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:16.989 01:23:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:16.989 01:23:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:16.989 01:23:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:16.989 01:23:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:16.989 01:23:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:16.989 01:23:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:16.989 01:23:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:16.989 01:23:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:16.989 01:23:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:16.989 01:23:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:16.989 
01:23:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:16.989 01:23:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:16.989 01:23:39 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:16.989 01:23:39 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:16.989 01:23:39 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:16.989 01:23:39 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.989 01:23:39 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.989 01:23:39 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.989 01:23:39 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:23:16.989 01:23:39 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.989 01:23:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:23:16.989 01:23:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:16.989 01:23:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:16.989 01:23:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:16.989 01:23:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:16.989 01:23:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:16.989 01:23:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:16.989 01:23:39 nvmf_tcp.nvmf_multicontroller 
-- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:16.989 01:23:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:16.989 01:23:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:16.989 01:23:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:16.989 01:23:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:16.989 01:23:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:16.989 01:23:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:16.989 01:23:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:16.989 01:23:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:16.989 01:23:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:16.989 01:23:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:16.989 01:23:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:16.989 01:23:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:16.989 01:23:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:16.989 01:23:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:16.989 01:23:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:16.989 01:23:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:16.989 01:23:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:16.989 01:23:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:16.989 01:23:39 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:23:16.989 01:23:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:22.275 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:22.275 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:23:22.275 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:22.275 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:22.275 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:22.275 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:22.275 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:22.275 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:23:22.275 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:22.275 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:23:22.275 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:23:22.275 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:23:22.275 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:23:22.275 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:23:22.275 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:23:22.275 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:22.275 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:22.275 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:22.275 01:23:44 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:22.275 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:22.275 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:22.275 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:22.275 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:22.275 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:22.275 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:22.275 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:22.275 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:22.275 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:22.275 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:22.275 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:22.275 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:22.275 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:22.276 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:22.276 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:22.276 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:22.276 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:22.276 01:23:44 nvmf_tcp.nvmf_multicontroller -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:22.276 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:22.276 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:22.276 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:22.276 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:22.276 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:22.276 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:22.276 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:22.276 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:22.276 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:22.276 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:22.276 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:22.276 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:22.276 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:22.276 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:22.276 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:22.276 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:22.276 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:22.276 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:22.276 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:22.276 01:23:44 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:22.276 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:22.276 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:22.276 Found net devices under 0000:86:00.0: cvl_0_0 00:23:22.276 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:22.276 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:22.276 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:22.276 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:22.276 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:22.276 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:22.276 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:22.276 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:22.276 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:22.276 Found net devices under 0000:86:00.1: cvl_0_1 00:23:22.276 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:22.276 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:22.276 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:23:22.276 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:22.276 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:22.276 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # 
nvmf_tcp_init 00:23:22.276 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:22.276 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:22.276 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:22.276 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:22.276 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:22.276 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:22.276 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:22.276 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:22.276 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:22.276 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:22.276 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:22.276 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:22.276 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:22.578 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:22.578 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:22.578 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:22.578 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:22.578 01:23:44 nvmf_tcp.nvmf_multicontroller 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:22.578 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:22.578 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:22.578 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:22.578 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:23:22.578 00:23:22.578 --- 10.0.0.2 ping statistics --- 00:23:22.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:22.578 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:23:22.578 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:22.578 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:22.578 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:23:22.578 00:23:22.578 --- 10.0.0.1 ping statistics --- 00:23:22.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:22.578 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:23:22.578 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:22.578 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:23:22.578 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:22.578 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:22.578 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:22.578 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:22.578 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:22.578 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:22.578 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 
-- # modprobe nvme-tcp 00:23:22.578 01:23:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:22.578 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:22.578 01:23:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:22.578 01:23:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:22.578 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=968212 00:23:22.578 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 968212 00:23:22.578 01:23:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:22.578 01:23:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 968212 ']' 00:23:22.578 01:23:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:22.578 01:23:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:22.578 01:23:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:22.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:22.579 01:23:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:22.579 01:23:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:22.579 [2024-07-25 01:23:45.009765] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:23:22.579 [2024-07-25 01:23:45.009810] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:22.871 EAL: No free 2048 kB hugepages reported on node 1 00:23:22.871 [2024-07-25 01:23:45.069194] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:22.871 [2024-07-25 01:23:45.145196] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:22.871 [2024-07-25 01:23:45.145236] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:22.871 [2024-07-25 01:23:45.145246] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:22.871 [2024-07-25 01:23:45.145252] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:22.871 [2024-07-25 01:23:45.145257] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:22.871 [2024-07-25 01:23:45.145356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:22.871 [2024-07-25 01:23:45.145443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:22.871 [2024-07-25 01:23:45.145444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:23.441 01:23:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:23.441 01:23:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:23:23.441 01:23:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:23.441 01:23:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:23.441 01:23:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:23.441 01:23:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:23.441 01:23:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:23.441 01:23:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.441 01:23:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:23.441 [2024-07-25 01:23:45.861740] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:23.441 01:23:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.441 01:23:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:23.442 01:23:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.442 01:23:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:23.442 Malloc0 00:23:23.442 01:23:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:23:23.442 01:23:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:23.442 01:23:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.442 01:23:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:23.442 01:23:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.442 01:23:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:23.442 01:23:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.442 01:23:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:23.442 01:23:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.442 01:23:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:23.442 01:23:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.442 01:23:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:23.442 [2024-07-25 01:23:45.922875] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:23.442 01:23:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.442 01:23:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:23.442 01:23:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.442 01:23:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:23.702 [2024-07-25 01:23:45.934818] tcp.c:1006:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:23.702 01:23:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.702 01:23:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:23.702 01:23:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.702 01:23:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:23.702 Malloc1 00:23:23.702 01:23:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.702 01:23:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:23.702 01:23:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.702 01:23:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:23.702 01:23:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.702 01:23:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:23.702 01:23:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.702 01:23:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:23.702 01:23:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.702 01:23:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:23.702 01:23:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.702 01:23:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:23.702 01:23:45 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.702 01:23:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:23.702 01:23:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.702 01:23:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:23.702 01:23:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.702 01:23:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=968334 00:23:23.702 01:23:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:23.702 01:23:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:23:23.702 01:23:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 968334 /var/tmp/bdevperf.sock 00:23:23.702 01:23:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 968334 ']' 00:23:23.702 01:23:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:23.702 01:23:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:23.702 01:23:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:23.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:23.702 01:23:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:23.702 01:23:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.643 01:23:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:24.643 01:23:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:23:24.643 01:23:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:24.643 01:23:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.643 01:23:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.643 NVMe0n1 00:23:24.643 01:23:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.643 01:23:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:24.643 01:23:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:24.643 01:23:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.643 01:23:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.643 01:23:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.643 1 00:23:24.643 01:23:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:24.643 01:23:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:24.643 01:23:47 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:24.643 01:23:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:24.643 01:23:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:24.643 01:23:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:24.643 01:23:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:24.643 01:23:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:24.643 01:23:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.643 01:23:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.643 request: 00:23:24.643 { 00:23:24.643 "name": "NVMe0", 00:23:24.643 "trtype": "tcp", 00:23:24.643 "traddr": "10.0.0.2", 00:23:24.643 "adrfam": "ipv4", 00:23:24.643 "trsvcid": "4420", 00:23:24.643 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:24.643 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:24.643 "hostaddr": "10.0.0.2", 00:23:24.643 "hostsvcid": "60000", 00:23:24.643 "prchk_reftag": false, 00:23:24.643 "prchk_guard": false, 00:23:24.643 "hdgst": false, 00:23:24.643 "ddgst": false, 00:23:24.643 "method": "bdev_nvme_attach_controller", 00:23:24.643 "req_id": 1 00:23:24.643 } 00:23:24.643 Got JSON-RPC error response 00:23:24.643 response: 00:23:24.643 { 00:23:24.643 "code": -114, 00:23:24.643 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:24.643 } 00:23:24.643 
01:23:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:24.643 01:23:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:24.643 01:23:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:24.643 01:23:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:24.643 01:23:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:24.643 01:23:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:24.643 01:23:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:24.643 01:23:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:24.643 01:23:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:24.643 01:23:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:24.643 01:23:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:24.643 01:23:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:24.643 01:23:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:24.643 01:23:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.643 01:23:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # 
set +x 00:23:24.643 request: 00:23:24.643 { 00:23:24.643 "name": "NVMe0", 00:23:24.643 "trtype": "tcp", 00:23:24.643 "traddr": "10.0.0.2", 00:23:24.643 "adrfam": "ipv4", 00:23:24.643 "trsvcid": "4420", 00:23:24.643 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:24.643 "hostaddr": "10.0.0.2", 00:23:24.643 "hostsvcid": "60000", 00:23:24.643 "prchk_reftag": false, 00:23:24.643 "prchk_guard": false, 00:23:24.643 "hdgst": false, 00:23:24.643 "ddgst": false, 00:23:24.643 "method": "bdev_nvme_attach_controller", 00:23:24.643 "req_id": 1 00:23:24.643 } 00:23:24.643 Got JSON-RPC error response 00:23:24.643 response: 00:23:24.643 { 00:23:24.643 "code": -114, 00:23:24.643 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:24.643 } 00:23:24.643 01:23:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:24.643 01:23:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:24.643 01:23:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:24.643 01:23:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:24.643 01:23:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:24.643 01:23:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:24.643 01:23:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:24.643 01:23:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:24.643 01:23:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 
-- # local arg=rpc_cmd 00:23:24.643 01:23:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:24.643 01:23:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:24.643 01:23:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:24.644 01:23:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:24.644 01:23:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.644 01:23:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.644 request: 00:23:24.644 { 00:23:24.644 "name": "NVMe0", 00:23:24.644 "trtype": "tcp", 00:23:24.644 "traddr": "10.0.0.2", 00:23:24.644 "adrfam": "ipv4", 00:23:24.644 "trsvcid": "4420", 00:23:24.644 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:24.644 "hostaddr": "10.0.0.2", 00:23:24.644 "hostsvcid": "60000", 00:23:24.644 "prchk_reftag": false, 00:23:24.644 "prchk_guard": false, 00:23:24.644 "hdgst": false, 00:23:24.644 "ddgst": false, 00:23:24.644 "multipath": "disable", 00:23:24.644 "method": "bdev_nvme_attach_controller", 00:23:24.644 "req_id": 1 00:23:24.644 } 00:23:24.644 Got JSON-RPC error response 00:23:24.644 response: 00:23:24.644 { 00:23:24.644 "code": -114, 00:23:24.644 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:23:24.644 } 00:23:24.644 01:23:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:24.644 01:23:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:24.644 01:23:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:24.644 01:23:47 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:24.644 01:23:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:24.644 01:23:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:24.644 01:23:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:24.644 01:23:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:24.644 01:23:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:24.644 01:23:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:24.644 01:23:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:24.644 01:23:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:24.644 01:23:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:24.644 01:23:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.644 01:23:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.904 request: 00:23:24.904 { 00:23:24.904 "name": "NVMe0", 00:23:24.904 "trtype": "tcp", 00:23:24.904 "traddr": "10.0.0.2", 00:23:24.904 "adrfam": "ipv4", 00:23:24.904 "trsvcid": "4420", 00:23:24.904 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:24.904 "hostaddr": "10.0.0.2", 00:23:24.904 
"hostsvcid": "60000", 00:23:24.904 "prchk_reftag": false, 00:23:24.904 "prchk_guard": false, 00:23:24.904 "hdgst": false, 00:23:24.904 "ddgst": false, 00:23:24.904 "multipath": "failover", 00:23:24.904 "method": "bdev_nvme_attach_controller", 00:23:24.904 "req_id": 1 00:23:24.904 } 00:23:24.904 Got JSON-RPC error response 00:23:24.904 response: 00:23:24.904 { 00:23:24.904 "code": -114, 00:23:24.904 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:24.904 } 00:23:24.904 01:23:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:24.904 01:23:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:24.904 01:23:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:24.904 01:23:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:24.904 01:23:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:24.905 01:23:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:24.905 01:23:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.905 01:23:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.905 00:23:24.905 01:23:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.905 01:23:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:24.905 01:23:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.905 01:23:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.165 01:23:47 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.165 01:23:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:25.165 01:23:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.165 01:23:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.165 00:23:25.165 01:23:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.165 01:23:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:25.165 01:23:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:25.165 01:23:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.165 01:23:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.165 01:23:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.165 01:23:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:25.165 01:23:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:26.547 0 00:23:26.547 01:23:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:23:26.547 01:23:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.547 01:23:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.547 01:23:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.547 
01:23:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 968334 00:23:26.547 01:23:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 968334 ']' 00:23:26.547 01:23:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 968334 00:23:26.547 01:23:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:23:26.547 01:23:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:26.547 01:23:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 968334 00:23:26.547 01:23:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:26.547 01:23:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:26.547 01:23:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 968334' 00:23:26.547 killing process with pid 968334 00:23:26.547 01:23:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 968334 00:23:26.547 01:23:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 968334 00:23:26.547 01:23:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:26.547 01:23:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.547 01:23:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.547 01:23:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.547 01:23:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:26.547 01:23:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.547 01:23:49 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:23:26.547 01:23:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.547 01:23:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:23:26.547 01:23:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:26.547 01:23:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:23:26.547 01:23:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:23:26.547 01:23:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:23:26.547 01:23:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:23:26.547 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:26.547 [2024-07-25 01:23:46.038235] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:23:26.547 [2024-07-25 01:23:46.038286] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid968334 ] 00:23:26.547 EAL: No free 2048 kB hugepages reported on node 1 00:23:26.547 [2024-07-25 01:23:46.092789] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.547 [2024-07-25 01:23:46.173323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:26.547 [2024-07-25 01:23:47.627155] bdev.c:4610:bdev_name_add: *ERROR*: Bdev name aaead1d1-668e-4ec4-a16d-19f7fda1860b already exists 00:23:26.547 [2024-07-25 01:23:47.627185] bdev.c:7719:bdev_register: *ERROR*: Unable to add uuid:aaead1d1-668e-4ec4-a16d-19f7fda1860b alias for bdev NVMe1n1 00:23:26.547 [2024-07-25 01:23:47.627193] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:23:26.547 Running I/O for 1 seconds... 
00:23:26.547 00:23:26.547 Latency(us) 00:23:26.547 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:26.547 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:23:26.547 NVMe0n1 : 1.01 22795.50 89.04 0.00 0.00 5596.23 4103.12 26784.28 00:23:26.547 =================================================================================================================== 00:23:26.547 Total : 22795.50 89.04 0.00 0.00 5596.23 4103.12 26784.28 00:23:26.547 Received shutdown signal, test time was about 1.000000 seconds 00:23:26.547 00:23:26.547 Latency(us) 00:23:26.547 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:26.547 =================================================================================================================== 00:23:26.547 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:26.547 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:26.547 01:23:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:26.547 01:23:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:23:26.548 01:23:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:23:26.548 01:23:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:26.548 01:23:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:23:26.808 01:23:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:26.808 01:23:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:23:26.808 01:23:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:26.808 01:23:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:26.808 rmmod nvme_tcp 00:23:26.808 rmmod nvme_fabrics 00:23:26.808 rmmod nvme_keyring 
00:23:26.808 01:23:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:26.808 01:23:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:23:26.808 01:23:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:23:26.808 01:23:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 968212 ']' 00:23:26.808 01:23:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 968212 00:23:26.808 01:23:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 968212 ']' 00:23:26.808 01:23:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 968212 00:23:26.808 01:23:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:23:26.808 01:23:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:26.808 01:23:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 968212 00:23:26.808 01:23:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:26.808 01:23:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:26.808 01:23:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 968212' 00:23:26.808 killing process with pid 968212 00:23:26.808 01:23:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 968212 00:23:26.808 01:23:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 968212 00:23:27.068 01:23:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:27.068 01:23:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:27.068 01:23:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:27.068 01:23:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:27.068 01:23:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:27.068 01:23:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:27.068 01:23:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:27.068 01:23:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:28.976 01:23:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:28.976 00:23:28.976 real 0m12.335s 00:23:28.976 user 0m17.468s 00:23:28.976 sys 0m5.089s 00:23:28.976 01:23:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:28.976 01:23:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:28.976 ************************************ 00:23:28.976 END TEST nvmf_multicontroller 00:23:28.976 ************************************ 00:23:29.236 01:23:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:29.236 01:23:51 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:29.236 01:23:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:29.236 01:23:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:29.236 01:23:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:29.236 ************************************ 00:23:29.236 START TEST nvmf_aer 00:23:29.236 ************************************ 00:23:29.236 01:23:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:29.236 * Looking for test storage... 
00:23:29.236 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:29.236 01:23:51 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:29.236 01:23:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:29.236 01:23:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:29.236 01:23:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:29.236 01:23:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:29.236 01:23:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:29.236 01:23:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:29.236 01:23:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:29.236 01:23:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:29.236 01:23:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:29.236 01:23:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:29.236 01:23:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:29.236 01:23:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:29.236 01:23:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:29.236 01:23:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:29.236 01:23:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:29.236 01:23:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:29.236 01:23:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:29.236 01:23:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:29.236 01:23:51 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:29.236 01:23:51 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:29.236 01:23:51 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:29.237 01:23:51 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.237 01:23:51 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.237 01:23:51 nvmf_tcp.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.237 01:23:51 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:23:29.237 01:23:51 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.237 01:23:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:23:29.237 01:23:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:29.237 01:23:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:29.237 01:23:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:29.237 01:23:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:29.237 01:23:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:29.237 01:23:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:29.237 01:23:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:29.237 01:23:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:23:29.237 01:23:51 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:29.237 01:23:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:29.237 01:23:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:29.237 01:23:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:29.237 01:23:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:29.237 01:23:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:29.237 01:23:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:29.237 01:23:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:29.237 01:23:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:29.237 01:23:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:29.237 01:23:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:29.237 01:23:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:23:29.237 01:23:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:34.518 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:34.518 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:23:34.518 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:34.518 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:34.518 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:34.518 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:34.518 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:34.518 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:23:34.518 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:34.518 01:23:56 
nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:23:34.518 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:23:34.518 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:23:34.518 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:23:34.518 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:23:34.518 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:23:34.518 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:34.518 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:34.518 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:34.518 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:34.518 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:34.518 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:34.518 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:34.518 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:34.518 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:34.518 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:34.518 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:34.518 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:34.518 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:34.518 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:34.518 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 
== e810 ]] 00:23:34.518 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:34.519 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:34.519 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:34.519 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:34.519 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:34.519 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:34.519 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:34.519 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:34.519 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:34.519 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:34.519 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:34.519 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:34.519 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:34.519 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:34.519 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:34.519 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:34.519 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:34.519 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:34.519 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:34.519 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:34.519 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:34.519 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:34.519 01:23:56 nvmf_tcp.nvmf_aer -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:34.519 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:34.519 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:34.519 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:34.519 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:34.519 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:34.519 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:34.519 Found net devices under 0000:86:00.0: cvl_0_0 00:23:34.519 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:34.519 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:34.519 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:34.519 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:34.519 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:34.519 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:34.519 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:34.519 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:34.519 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:34.519 Found net devices under 0000:86:00.1: cvl_0_1 00:23:34.519 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:34.519 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:34.519 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:23:34.519 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 
00:23:34.519 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:34.519 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:34.519 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:34.519 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:34.519 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:34.519 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:34.519 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:34.519 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:34.519 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:34.519 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:34.519 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:34.519 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:34.519 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:34.519 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:34.519 01:23:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:34.780 01:23:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:34.780 01:23:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:34.780 01:23:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:34.780 01:23:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:34.780 01:23:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set lo up 00:23:34.780 01:23:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:34.780 01:23:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:34.780 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:34.780 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:23:34.780 00:23:34.780 --- 10.0.0.2 ping statistics --- 00:23:34.780 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:34.780 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:23:34.780 01:23:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:34.780 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:34.780 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:23:34.780 00:23:34.780 --- 10.0.0.1 ping statistics --- 00:23:34.780 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:34.780 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:23:34.780 01:23:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:34.780 01:23:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:23:34.780 01:23:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:34.780 01:23:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:34.780 01:23:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:34.780 01:23:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:34.780 01:23:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:34.780 01:23:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:34.780 01:23:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:34.780 01:23:57 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:34.780 01:23:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 
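The trace above shows nvmf_tcp_init building the test topology: one port of the NIC pair (cvl_0_0) is moved into a fresh network namespace for the target, 10.0.0.1/10.0.0.2 are assigned to the two sides, the NVMe/TCP port is opened, and connectivity is verified with ping in both directions. A minimal sketch of that sequence, with interface names, the namespace name, and addresses taken from the log (a reconstruction of the harness flow, not its exact common.sh code; requires root and two wired-together interfaces, so it is not runnable as-is):

```shell
#!/usr/bin/env bash
# Sketch of the nvmf_tcp_init flow seen in the trace. cvl_0_0/cvl_0_1,
# the namespace name, and the 10.0.0.x addresses come from the log.
set -e
NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0                 # start from a clean slate
ip -4 addr flush cvl_0_1

ip netns add "$NS"                       # target side gets its own namespace
ip link set cvl_0_0 netns "$NS"

ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0    # target side

ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# open the NVMe/TCP listener port, then verify both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```

Keeping the target in a namespace is what lets a single host exercise real NIC-to-NIC TCP traffic; every later target-side command in the log is prefixed with `ip netns exec cvl_0_0_ns_spdk` for the same reason.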
00:23:34.780 01:23:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:34.780 01:23:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:34.780 01:23:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=972323 00:23:34.780 01:23:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 972323 00:23:34.780 01:23:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:34.780 01:23:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 972323 ']' 00:23:34.780 01:23:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:34.780 01:23:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:34.780 01:23:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:34.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:34.780 01:23:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:34.780 01:23:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:34.780 [2024-07-25 01:23:57.252536] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:23:34.780 [2024-07-25 01:23:57.252586] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:35.041 EAL: No free 2048 kB hugepages reported on node 1 00:23:35.041 [2024-07-25 01:23:57.312785] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:35.041 [2024-07-25 01:23:57.387835] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:35.041 [2024-07-25 01:23:57.387877] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:35.041 [2024-07-25 01:23:57.387884] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:35.041 [2024-07-25 01:23:57.387890] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:35.041 [2024-07-25 01:23:57.387895] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:35.041 [2024-07-25 01:23:57.387943] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:35.041 [2024-07-25 01:23:57.388056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:35.041 [2024-07-25 01:23:57.388109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:35.041 [2024-07-25 01:23:57.388111] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:35.612 01:23:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:35.612 01:23:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:23:35.612 01:23:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:35.612 01:23:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:35.612 01:23:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:35.612 01:23:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:35.612 01:23:58 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:35.612 01:23:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.612 01:23:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:35.612 [2024-07-25 01:23:58.101103] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:35.872 01:23:58 
nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.872 01:23:58 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:35.872 01:23:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.872 01:23:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:35.872 Malloc0 00:23:35.872 01:23:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.872 01:23:58 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:35.872 01:23:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.872 01:23:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:35.872 01:23:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.872 01:23:58 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:35.872 01:23:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.872 01:23:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:35.872 01:23:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.872 01:23:58 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:35.872 01:23:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.872 01:23:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:35.872 [2024-07-25 01:23:58.152693] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:35.872 01:23:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.872 01:23:58 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:35.872 01:23:58 nvmf_tcp.nvmf_aer -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.872 01:23:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:35.872 [ 00:23:35.872 { 00:23:35.872 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:35.872 "subtype": "Discovery", 00:23:35.872 "listen_addresses": [], 00:23:35.872 "allow_any_host": true, 00:23:35.872 "hosts": [] 00:23:35.872 }, 00:23:35.872 { 00:23:35.872 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:35.872 "subtype": "NVMe", 00:23:35.872 "listen_addresses": [ 00:23:35.872 { 00:23:35.872 "trtype": "TCP", 00:23:35.872 "adrfam": "IPv4", 00:23:35.872 "traddr": "10.0.0.2", 00:23:35.872 "trsvcid": "4420" 00:23:35.872 } 00:23:35.872 ], 00:23:35.872 "allow_any_host": true, 00:23:35.872 "hosts": [], 00:23:35.872 "serial_number": "SPDK00000000000001", 00:23:35.872 "model_number": "SPDK bdev Controller", 00:23:35.872 "max_namespaces": 2, 00:23:35.872 "min_cntlid": 1, 00:23:35.872 "max_cntlid": 65519, 00:23:35.872 "namespaces": [ 00:23:35.872 { 00:23:35.872 "nsid": 1, 00:23:35.872 "bdev_name": "Malloc0", 00:23:35.872 "name": "Malloc0", 00:23:35.872 "nguid": "206543100F654E548B82835679BFC977", 00:23:35.872 "uuid": "20654310-0f65-4e54-8b82-835679bfc977" 00:23:35.872 } 00:23:35.872 ] 00:23:35.872 } 00:23:35.872 ] 00:23:35.872 01:23:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.872 01:23:58 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:35.872 01:23:58 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:35.872 01:23:58 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=972572 00:23:35.872 01:23:58 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:35.872 01:23:58 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:35.872 01:23:58 
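Before launching the AER tool, host/aer.sh provisions the target entirely over JSON-RPC, and the nvmf_get_subsystems output above reflects that state: one Malloc0 namespace under cnode1 with a TCP listener on 10.0.0.2:4420. The sequence, reconstructed from the rpc_cmd lines in the trace (assumes a running nvmf_tgt; `rpc.py` stands in for whatever rpc_cmd resolves to in the harness, so this is illustrative, not runnable here):

```shell
# Reconstruction of the rpc_cmd calls logged by host/aer.sh.
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 --name Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
       -a -s SPDK00000000000001 -m 2     # allow any host, max 2 namespaces
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
       -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_get_subsystems                # prints the JSON shown above
```

The `-m 2` cap matters for this test: it leaves exactly one free namespace slot, so the later `nvmf_subsystem_add_ns ... Malloc1 -n 2` is what triggers the namespace-attribute-changed AER the tool is waiting for.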
nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:23:35.872 01:23:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:35.872 01:23:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:23:35.872 01:23:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:23:35.872 01:23:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:23:35.872 EAL: No free 2048 kB hugepages reported on node 1 00:23:35.872 01:23:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:35.872 01:23:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:23:35.872 01:23:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:23:35.872 01:23:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:23:36.133 01:23:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:36.133 01:23:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:36.133 01:23:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:23:36.133 01:23:58 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:36.133 01:23:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.133 01:23:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:36.133 Malloc1 00:23:36.133 01:23:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.133 01:23:58 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:36.133 01:23:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.133 01:23:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:36.133 01:23:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.133 01:23:58 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:36.133 01:23:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.133 01:23:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:36.133 Asynchronous Event Request test 00:23:36.133 Attaching to 10.0.0.2 00:23:36.133 Attached to 10.0.0.2 00:23:36.133 Registering asynchronous event callbacks... 00:23:36.133 Starting namespace attribute notice tests for all controllers... 00:23:36.133 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:36.133 aer_cb - Changed Namespace 00:23:36.133 Cleaning up... 
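The repeated `'[' '!' -e /tmp/aer_touch_file ']'` / `sleep 0.1` lines above are the harness's waitforfile helper polling until the aer tool creates its sync file, signalling that its AER callbacks are registered. A minimal reconstruction of that loop, assuming the 200-iteration (roughly 20 second) cap visible in the trace:

```shell
# Poll until a file exists, as autotest_common.sh's waitforfile appears
# to in the trace: test, sleep 0.1, give up after 200 tries.
waitforfile() {
    local file=$1 i=0
    while [ ! -e "$file" ]; do
        if [ "$i" -ge 200 ]; then
            return 1               # timed out after ~20 seconds
        fi
        i=$((i + 1))
        sleep 0.1
    done
    return 0
}
```

This touch-file handshake is why `rm -f /tmp/aer_touch_file` runs before the aer binary starts: a stale file from an earlier run would make the wait return immediately and race the namespace-add against callback registration.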
00:23:36.133 [ 00:23:36.133 { 00:23:36.133 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:36.133 "subtype": "Discovery", 00:23:36.133 "listen_addresses": [], 00:23:36.133 "allow_any_host": true, 00:23:36.133 "hosts": [] 00:23:36.133 }, 00:23:36.133 { 00:23:36.133 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.133 "subtype": "NVMe", 00:23:36.133 "listen_addresses": [ 00:23:36.133 { 00:23:36.133 "trtype": "TCP", 00:23:36.133 "adrfam": "IPv4", 00:23:36.133 "traddr": "10.0.0.2", 00:23:36.133 "trsvcid": "4420" 00:23:36.133 } 00:23:36.133 ], 00:23:36.133 "allow_any_host": true, 00:23:36.133 "hosts": [], 00:23:36.133 "serial_number": "SPDK00000000000001", 00:23:36.133 "model_number": "SPDK bdev Controller", 00:23:36.133 "max_namespaces": 2, 00:23:36.133 "min_cntlid": 1, 00:23:36.133 "max_cntlid": 65519, 00:23:36.133 "namespaces": [ 00:23:36.133 { 00:23:36.133 "nsid": 1, 00:23:36.133 "bdev_name": "Malloc0", 00:23:36.133 "name": "Malloc0", 00:23:36.133 "nguid": "206543100F654E548B82835679BFC977", 00:23:36.133 "uuid": "20654310-0f65-4e54-8b82-835679bfc977" 00:23:36.133 }, 00:23:36.133 { 00:23:36.133 "nsid": 2, 00:23:36.133 "bdev_name": "Malloc1", 00:23:36.133 "name": "Malloc1", 00:23:36.133 "nguid": "E1AF7746CB514FFDA641239603363706", 00:23:36.134 "uuid": "e1af7746-cb51-4ffd-a641-239603363706" 00:23:36.134 } 00:23:36.134 ] 00:23:36.134 } 00:23:36.134 ] 00:23:36.134 01:23:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.134 01:23:58 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 972572 00:23:36.134 01:23:58 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:36.134 01:23:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.134 01:23:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:36.134 01:23:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.134 01:23:58 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 
00:23:36.134 01:23:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.134 01:23:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:36.134 01:23:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.134 01:23:58 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:36.134 01:23:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.134 01:23:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:36.134 01:23:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.134 01:23:58 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:36.134 01:23:58 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:36.134 01:23:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:36.134 01:23:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:23:36.134 01:23:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:36.134 01:23:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:23:36.134 01:23:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:36.134 01:23:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:36.134 rmmod nvme_tcp 00:23:36.134 rmmod nvme_fabrics 00:23:36.134 rmmod nvme_keyring 00:23:36.134 01:23:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:36.134 01:23:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:23:36.134 01:23:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:23:36.134 01:23:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 972323 ']' 00:23:36.134 01:23:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 972323 00:23:36.134 01:23:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 972323 ']' 00:23:36.134 01:23:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 
972323 00:23:36.134 01:23:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:23:36.134 01:23:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:36.134 01:23:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 972323 00:23:36.134 01:23:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:36.134 01:23:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:36.134 01:23:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 972323' 00:23:36.134 killing process with pid 972323 00:23:36.134 01:23:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 972323 00:23:36.134 01:23:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 972323 00:23:36.402 01:23:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:36.402 01:23:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:36.402 01:23:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:36.402 01:23:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:36.402 01:23:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:36.402 01:23:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:36.402 01:23:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:36.402 01:23:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:38.947 01:24:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:38.947 00:23:38.947 real 0m9.328s 00:23:38.947 user 0m7.131s 00:23:38.947 sys 0m4.628s 00:23:38.947 01:24:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:38.947 01:24:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:38.947 
************************************ 00:23:38.947 END TEST nvmf_aer 00:23:38.947 ************************************ 00:23:38.947 01:24:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:38.947 01:24:00 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:38.947 01:24:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:38.947 01:24:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:38.947 01:24:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:38.947 ************************************ 00:23:38.947 START TEST nvmf_async_init 00:23:38.947 ************************************ 00:23:38.947 01:24:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:38.947 * Looking for test storage... 00:23:38.947 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:38.947 01:24:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:38.947 01:24:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:38.947 01:24:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:38.947 01:24:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:38.947 01:24:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:38.947 01:24:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:38.947 01:24:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:38.947 01:24:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:38.947 01:24:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:38.947 
01:24:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:38.947 01:24:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:38.947 01:24:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:38.947 01:24:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:38.947 01:24:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:38.947 01:24:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:38.947 01:24:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:38.947 01:24:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:38.947 01:24:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:38.947 01:24:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:38.947 01:24:01 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:38.947 01:24:01 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:38.947 01:24:01 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:38.947 01:24:01 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.947 01:24:01 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.947 01:24:01 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.947 01:24:01 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:38.947 01:24:01 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.947 01:24:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:23:38.947 01:24:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:38.947 01:24:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:38.947 01:24:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:38.947 01:24:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:38.947 01:24:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:38.947 01:24:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:38.947 01:24:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:38.947 01:24:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:38.947 01:24:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:38.947 01:24:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:38.947 01:24:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:38.947 01:24:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:38.947 01:24:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:38.947 01:24:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:38.947 01:24:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- 
# nguid=888157df02374c4a8a654cbd00dda860 00:23:38.947 01:24:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:38.947 01:24:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:38.947 01:24:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:38.947 01:24:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:38.947 01:24:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:38.947 01:24:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:38.947 01:24:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:38.947 01:24:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:38.947 01:24:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:38.948 01:24:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:38.948 01:24:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:38.948 01:24:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:23:38.948 01:24:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:44.233 
01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:44.233 01:24:05 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:44.233 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:44.233 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:44.233 01:24:05 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:44.233 Found net devices under 0000:86:00.0: cvl_0_0 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:44.233 01:24:05 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:44.233 Found net devices under 0000:86:00.1: cvl_0_1 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:44.233 
01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:44.233 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:44.233 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:23:44.233 00:23:44.233 --- 10.0.0.2 ping statistics --- 00:23:44.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:44.233 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:44.233 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:44.233 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.307 ms 00:23:44.233 00:23:44.233 --- 10.0.0.1 ping statistics --- 00:23:44.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:44.233 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:44.233 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:44.234 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:44.234 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:44.234 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:44.234 01:24:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:44.234 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:44.234 01:24:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:44.234 01:24:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:44.234 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=976012 00:23:44.234 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 976012 00:23:44.234 01:24:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 976012 ']' 00:23:44.234 01:24:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:44.234 01:24:05 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:23:44.234 01:24:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:44.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:44.234 01:24:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:44.234 01:24:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:44.234 01:24:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:44.234 [2024-07-25 01:24:06.023710] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:23:44.234 [2024-07-25 01:24:06.023757] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:44.234 EAL: No free 2048 kB hugepages reported on node 1 00:23:44.234 [2024-07-25 01:24:06.079952] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:44.234 [2024-07-25 01:24:06.166177] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:44.234 [2024-07-25 01:24:06.166210] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:44.234 [2024-07-25 01:24:06.166217] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:44.234 [2024-07-25 01:24:06.166223] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:44.234 [2024-07-25 01:24:06.166228] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:44.234 [2024-07-25 01:24:06.166245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:44.494 01:24:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:44.494 01:24:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:23:44.494 01:24:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:44.494 01:24:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:44.494 01:24:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:44.494 01:24:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:44.494 01:24:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:44.494 01:24:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.494 01:24:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:44.494 [2024-07-25 01:24:06.868791] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:44.494 01:24:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.494 01:24:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:44.494 01:24:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.494 01:24:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:44.494 null0 00:23:44.494 01:24:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.494 01:24:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:44.494 01:24:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.494 01:24:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:44.494 
01:24:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.494 01:24:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:44.494 01:24:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.494 01:24:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:44.494 01:24:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.494 01:24:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 888157df02374c4a8a654cbd00dda860 00:23:44.494 01:24:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.494 01:24:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:44.494 01:24:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.494 01:24:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:44.494 01:24:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.494 01:24:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:44.494 [2024-07-25 01:24:06.908969] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:44.494 01:24:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.494 01:24:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:44.494 01:24:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.494 01:24:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:44.754 nvme0n1 00:23:44.754 01:24:07 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.754 01:24:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:44.754 01:24:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.754 01:24:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:44.754 [ 00:23:44.754 { 00:23:44.754 "name": "nvme0n1", 00:23:44.754 "aliases": [ 00:23:44.754 "888157df-0237-4c4a-8a65-4cbd00dda860" 00:23:44.754 ], 00:23:44.754 "product_name": "NVMe disk", 00:23:44.754 "block_size": 512, 00:23:44.754 "num_blocks": 2097152, 00:23:44.754 "uuid": "888157df-0237-4c4a-8a65-4cbd00dda860", 00:23:44.754 "assigned_rate_limits": { 00:23:44.754 "rw_ios_per_sec": 0, 00:23:44.754 "rw_mbytes_per_sec": 0, 00:23:44.754 "r_mbytes_per_sec": 0, 00:23:44.754 "w_mbytes_per_sec": 0 00:23:44.754 }, 00:23:44.754 "claimed": false, 00:23:44.754 "zoned": false, 00:23:44.754 "supported_io_types": { 00:23:44.754 "read": true, 00:23:44.754 "write": true, 00:23:44.754 "unmap": false, 00:23:44.754 "flush": true, 00:23:44.754 "reset": true, 00:23:44.754 "nvme_admin": true, 00:23:44.754 "nvme_io": true, 00:23:44.754 "nvme_io_md": false, 00:23:44.754 "write_zeroes": true, 00:23:44.754 "zcopy": false, 00:23:44.754 "get_zone_info": false, 00:23:44.754 "zone_management": false, 00:23:44.754 "zone_append": false, 00:23:44.754 "compare": true, 00:23:44.754 "compare_and_write": true, 00:23:44.754 "abort": true, 00:23:44.754 "seek_hole": false, 00:23:44.754 "seek_data": false, 00:23:44.754 "copy": true, 00:23:44.754 "nvme_iov_md": false 00:23:44.754 }, 00:23:44.754 "memory_domains": [ 00:23:44.754 { 00:23:44.754 "dma_device_id": "system", 00:23:44.754 "dma_device_type": 1 00:23:44.754 } 00:23:44.754 ], 00:23:44.754 "driver_specific": { 00:23:44.754 "nvme": [ 00:23:44.754 { 00:23:44.754 "trid": { 00:23:44.754 "trtype": "TCP", 00:23:44.754 "adrfam": "IPv4", 00:23:44.754 "traddr": "10.0.0.2", 
00:23:44.754 "trsvcid": "4420", 00:23:44.754 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:44.754 }, 00:23:44.754 "ctrlr_data": { 00:23:44.754 "cntlid": 1, 00:23:44.754 "vendor_id": "0x8086", 00:23:44.754 "model_number": "SPDK bdev Controller", 00:23:44.754 "serial_number": "00000000000000000000", 00:23:44.754 "firmware_revision": "24.09", 00:23:44.754 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:44.754 "oacs": { 00:23:44.754 "security": 0, 00:23:44.754 "format": 0, 00:23:44.754 "firmware": 0, 00:23:44.754 "ns_manage": 0 00:23:44.754 }, 00:23:44.754 "multi_ctrlr": true, 00:23:44.754 "ana_reporting": false 00:23:44.754 }, 00:23:44.754 "vs": { 00:23:44.754 "nvme_version": "1.3" 00:23:44.754 }, 00:23:44.754 "ns_data": { 00:23:44.754 "id": 1, 00:23:44.754 "can_share": true 00:23:44.754 } 00:23:44.754 } 00:23:44.754 ], 00:23:44.754 "mp_policy": "active_passive" 00:23:44.754 } 00:23:44.754 } 00:23:44.754 ] 00:23:44.754 01:24:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.754 01:24:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:44.754 01:24:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.754 01:24:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:44.754 [2024-07-25 01:24:07.157523] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:44.754 [2024-07-25 01:24:07.157576] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eb390 (9): Bad file descriptor 00:23:45.057 [2024-07-25 01:24:07.289133] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:23:45.057 01:24:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.057 01:24:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:45.057 01:24:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.057 01:24:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:45.057 [ 00:23:45.057 { 00:23:45.057 "name": "nvme0n1", 00:23:45.057 "aliases": [ 00:23:45.057 "888157df-0237-4c4a-8a65-4cbd00dda860" 00:23:45.057 ], 00:23:45.057 "product_name": "NVMe disk", 00:23:45.057 "block_size": 512, 00:23:45.057 "num_blocks": 2097152, 00:23:45.057 "uuid": "888157df-0237-4c4a-8a65-4cbd00dda860", 00:23:45.057 "assigned_rate_limits": { 00:23:45.057 "rw_ios_per_sec": 0, 00:23:45.057 "rw_mbytes_per_sec": 0, 00:23:45.057 "r_mbytes_per_sec": 0, 00:23:45.057 "w_mbytes_per_sec": 0 00:23:45.057 }, 00:23:45.057 "claimed": false, 00:23:45.057 "zoned": false, 00:23:45.057 "supported_io_types": { 00:23:45.057 "read": true, 00:23:45.057 "write": true, 00:23:45.057 "unmap": false, 00:23:45.057 "flush": true, 00:23:45.057 "reset": true, 00:23:45.057 "nvme_admin": true, 00:23:45.057 "nvme_io": true, 00:23:45.057 "nvme_io_md": false, 00:23:45.057 "write_zeroes": true, 00:23:45.057 "zcopy": false, 00:23:45.057 "get_zone_info": false, 00:23:45.057 "zone_management": false, 00:23:45.057 "zone_append": false, 00:23:45.057 "compare": true, 00:23:45.057 "compare_and_write": true, 00:23:45.057 "abort": true, 00:23:45.057 "seek_hole": false, 00:23:45.057 "seek_data": false, 00:23:45.057 "copy": true, 00:23:45.057 "nvme_iov_md": false 00:23:45.057 }, 00:23:45.057 "memory_domains": [ 00:23:45.057 { 00:23:45.057 "dma_device_id": "system", 00:23:45.057 "dma_device_type": 1 00:23:45.057 } 00:23:45.057 ], 00:23:45.057 "driver_specific": { 00:23:45.057 "nvme": [ 00:23:45.057 { 00:23:45.057 "trid": { 00:23:45.057 "trtype": "TCP", 00:23:45.057 "adrfam": "IPv4", 00:23:45.057 
"traddr": "10.0.0.2", 00:23:45.057 "trsvcid": "4420", 00:23:45.057 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:45.057 }, 00:23:45.057 "ctrlr_data": { 00:23:45.057 "cntlid": 2, 00:23:45.057 "vendor_id": "0x8086", 00:23:45.057 "model_number": "SPDK bdev Controller", 00:23:45.057 "serial_number": "00000000000000000000", 00:23:45.057 "firmware_revision": "24.09", 00:23:45.057 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:45.057 "oacs": { 00:23:45.057 "security": 0, 00:23:45.057 "format": 0, 00:23:45.057 "firmware": 0, 00:23:45.057 "ns_manage": 0 00:23:45.057 }, 00:23:45.057 "multi_ctrlr": true, 00:23:45.057 "ana_reporting": false 00:23:45.057 }, 00:23:45.057 "vs": { 00:23:45.057 "nvme_version": "1.3" 00:23:45.057 }, 00:23:45.057 "ns_data": { 00:23:45.057 "id": 1, 00:23:45.057 "can_share": true 00:23:45.057 } 00:23:45.057 } 00:23:45.057 ], 00:23:45.057 "mp_policy": "active_passive" 00:23:45.057 } 00:23:45.057 } 00:23:45.057 ] 00:23:45.058 01:24:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.058 01:24:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:45.058 01:24:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.058 01:24:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:45.058 01:24:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.058 01:24:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:23:45.058 01:24:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.T381egf02h 00:23:45.058 01:24:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:45.058 01:24:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.T381egf02h 00:23:45.058 01:24:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:45.058 01:24:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.058 01:24:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:45.058 01:24:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.058 01:24:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:45.058 01:24:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.058 01:24:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:45.058 [2024-07-25 01:24:07.338084] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:45.058 [2024-07-25 01:24:07.338184] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:45.058 01:24:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.058 01:24:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.T381egf02h 00:23:45.058 01:24:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.058 01:24:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:45.058 [2024-07-25 01:24:07.346097] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:45.058 01:24:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.058 01:24:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.T381egf02h 00:23:45.058 01:24:07 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.058 01:24:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:45.058 [2024-07-25 01:24:07.354129] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:45.058 [2024-07-25 01:24:07.354163] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:45.058 nvme0n1 00:23:45.058 01:24:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.058 01:24:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:45.058 01:24:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.058 01:24:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:45.058 [ 00:23:45.058 { 00:23:45.058 "name": "nvme0n1", 00:23:45.058 "aliases": [ 00:23:45.058 "888157df-0237-4c4a-8a65-4cbd00dda860" 00:23:45.058 ], 00:23:45.058 "product_name": "NVMe disk", 00:23:45.058 "block_size": 512, 00:23:45.058 "num_blocks": 2097152, 00:23:45.058 "uuid": "888157df-0237-4c4a-8a65-4cbd00dda860", 00:23:45.058 "assigned_rate_limits": { 00:23:45.058 "rw_ios_per_sec": 0, 00:23:45.058 "rw_mbytes_per_sec": 0, 00:23:45.058 "r_mbytes_per_sec": 0, 00:23:45.058 "w_mbytes_per_sec": 0 00:23:45.058 }, 00:23:45.058 "claimed": false, 00:23:45.058 "zoned": false, 00:23:45.058 "supported_io_types": { 00:23:45.058 "read": true, 00:23:45.058 "write": true, 00:23:45.058 "unmap": false, 00:23:45.058 "flush": true, 00:23:45.058 "reset": true, 00:23:45.058 "nvme_admin": true, 00:23:45.058 "nvme_io": true, 00:23:45.058 "nvme_io_md": false, 00:23:45.058 "write_zeroes": true, 00:23:45.058 "zcopy": false, 00:23:45.058 "get_zone_info": false, 00:23:45.058 "zone_management": false, 00:23:45.058 "zone_append": false, 00:23:45.058 "compare": true, 00:23:45.058 
"compare_and_write": true, 00:23:45.058 "abort": true, 00:23:45.058 "seek_hole": false, 00:23:45.058 "seek_data": false, 00:23:45.058 "copy": true, 00:23:45.058 "nvme_iov_md": false 00:23:45.058 }, 00:23:45.058 "memory_domains": [ 00:23:45.058 { 00:23:45.058 "dma_device_id": "system", 00:23:45.058 "dma_device_type": 1 00:23:45.058 } 00:23:45.058 ], 00:23:45.058 "driver_specific": { 00:23:45.058 "nvme": [ 00:23:45.058 { 00:23:45.058 "trid": { 00:23:45.058 "trtype": "TCP", 00:23:45.058 "adrfam": "IPv4", 00:23:45.058 "traddr": "10.0.0.2", 00:23:45.058 "trsvcid": "4421", 00:23:45.058 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:45.058 }, 00:23:45.058 "ctrlr_data": { 00:23:45.058 "cntlid": 3, 00:23:45.058 "vendor_id": "0x8086", 00:23:45.058 "model_number": "SPDK bdev Controller", 00:23:45.058 "serial_number": "00000000000000000000", 00:23:45.058 "firmware_revision": "24.09", 00:23:45.058 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:45.058 "oacs": { 00:23:45.058 "security": 0, 00:23:45.058 "format": 0, 00:23:45.058 "firmware": 0, 00:23:45.058 "ns_manage": 0 00:23:45.058 }, 00:23:45.058 "multi_ctrlr": true, 00:23:45.058 "ana_reporting": false 00:23:45.058 }, 00:23:45.058 "vs": { 00:23:45.058 "nvme_version": "1.3" 00:23:45.058 }, 00:23:45.058 "ns_data": { 00:23:45.058 "id": 1, 00:23:45.058 "can_share": true 00:23:45.058 } 00:23:45.058 } 00:23:45.058 ], 00:23:45.058 "mp_policy": "active_passive" 00:23:45.058 } 00:23:45.058 } 00:23:45.058 ] 00:23:45.058 01:24:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.058 01:24:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:45.058 01:24:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.058 01:24:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:45.058 01:24:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.058 01:24:07 
nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.T381egf02h 00:23:45.058 01:24:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:23:45.058 01:24:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:23:45.058 01:24:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:45.058 01:24:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:23:45.058 01:24:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:45.058 01:24:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:23:45.058 01:24:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:45.058 01:24:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:45.058 rmmod nvme_tcp 00:23:45.058 rmmod nvme_fabrics 00:23:45.058 rmmod nvme_keyring 00:23:45.058 01:24:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:45.058 01:24:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:23:45.058 01:24:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:23:45.058 01:24:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 976012 ']' 00:23:45.058 01:24:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 976012 00:23:45.058 01:24:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 976012 ']' 00:23:45.058 01:24:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 976012 00:23:45.058 01:24:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:23:45.058 01:24:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:45.058 01:24:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 976012 00:23:45.341 01:24:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:45.341 01:24:07 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:45.341 01:24:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 976012' 00:23:45.341 killing process with pid 976012 00:23:45.341 01:24:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 976012 00:23:45.341 [2024-07-25 01:24:07.550723] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:45.341 [2024-07-25 01:24:07.550748] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:45.341 01:24:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 976012 00:23:45.341 01:24:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:45.341 01:24:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:45.341 01:24:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:45.341 01:24:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:45.341 01:24:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:45.341 01:24:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:45.341 01:24:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:45.341 01:24:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:47.884 01:24:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:47.884 00:23:47.884 real 0m8.863s 00:23:47.884 user 0m3.147s 00:23:47.884 sys 0m4.184s 00:23:47.884 01:24:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:47.884 01:24:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 
-- # set +x 00:23:47.884 ************************************ 00:23:47.884 END TEST nvmf_async_init 00:23:47.884 ************************************ 00:23:47.884 01:24:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:47.884 01:24:09 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:47.884 01:24:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:47.884 01:24:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:47.884 01:24:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:47.884 ************************************ 00:23:47.884 START TEST dma 00:23:47.884 ************************************ 00:23:47.884 01:24:09 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:47.884 * Looking for test storage... 00:23:47.884 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:47.884 01:24:09 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:47.884 01:24:09 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:23:47.884 01:24:09 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:47.884 01:24:09 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:47.884 01:24:09 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:47.884 01:24:09 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:47.884 01:24:09 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:47.884 01:24:09 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:47.884 01:24:09 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:47.884 01:24:09 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:47.884 01:24:09 nvmf_tcp.dma -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:47.884 01:24:09 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:47.884 01:24:09 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:47.884 01:24:09 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:47.884 01:24:09 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:47.884 01:24:09 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:47.884 01:24:09 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:47.884 01:24:09 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:47.884 01:24:09 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:47.884 01:24:09 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:47.885 01:24:09 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:47.885 01:24:09 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:47.885 01:24:09 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.885 01:24:09 nvmf_tcp.dma -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.885 01:24:09 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.885 01:24:09 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:23:47.885 01:24:09 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.885 01:24:09 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:23:47.885 01:24:09 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:47.885 01:24:09 nvmf_tcp.dma -- 
nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:47.885 01:24:09 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:47.885 01:24:09 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:47.885 01:24:09 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:47.885 01:24:09 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:47.885 01:24:09 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:47.885 01:24:09 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:47.885 01:24:09 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:23:47.885 01:24:09 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:23:47.885 00:23:47.885 real 0m0.107s 00:23:47.885 user 0m0.047s 00:23:47.885 sys 0m0.065s 00:23:47.885 01:24:09 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:47.885 01:24:09 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:23:47.885 ************************************ 00:23:47.885 END TEST dma 00:23:47.885 ************************************ 00:23:47.885 01:24:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:47.885 01:24:09 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:47.885 01:24:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:47.885 01:24:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:47.885 01:24:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:47.885 ************************************ 00:23:47.885 START TEST nvmf_identify 00:23:47.885 ************************************ 00:23:47.885 01:24:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:47.885 * Looking for test storage... 
00:23:47.885 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:47.885 01:24:10 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:47.885 01:24:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:23:47.885 01:24:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:47.885 01:24:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:47.885 01:24:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:47.885 01:24:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:47.885 01:24:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:47.885 01:24:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:47.885 01:24:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:47.885 01:24:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:47.885 01:24:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:47.885 01:24:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:47.885 01:24:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:47.885 01:24:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:47.885 01:24:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:47.885 01:24:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:47.885 01:24:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:47.885 01:24:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:47.885 01:24:10 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:47.885 01:24:10 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:47.885 01:24:10 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:47.885 01:24:10 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:47.885 01:24:10 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.885 01:24:10 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.885 01:24:10 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.885 01:24:10 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:23:47.885 01:24:10 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.885 01:24:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:23:47.885 01:24:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:47.885 01:24:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:47.885 01:24:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:47.885 01:24:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:47.885 01:24:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:47.885 01:24:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:47.885 01:24:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:47.885 01:24:10 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:47.885 01:24:10 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:47.885 01:24:10 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:47.885 01:24:10 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:23:47.885 01:24:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:47.885 01:24:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:47.885 01:24:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:47.885 01:24:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:47.885 01:24:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:47.885 01:24:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:47.885 01:24:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:47.885 01:24:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:47.885 01:24:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:47.885 01:24:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:47.885 01:24:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:23:47.885 01:24:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:53.174 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:53.174 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:23:53.174 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:53.174 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:53.174 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:53.174 01:24:14 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:53.174 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:53.174 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:23:53.174 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:53.174 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:23:53.174 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:23:53.174 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:23:53.174 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:23:53.174 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:23:53.174 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:23:53.174 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:53.174 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:53.174 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:53.174 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:53.174 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:53.174 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:53.174 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:53.174 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:53.174 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:53.174 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:53.174 
01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:53.174 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:53.174 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:53.174 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:53.174 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:53.174 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:53.174 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:53.174 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:53.174 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:53.174 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:53.174 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:53.174 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:53.174 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:53.174 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:53.174 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:53.174 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:53.174 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:53.174 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:53.174 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:53.174 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:53.174 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:53.174 01:24:14 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:53.174 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:53.174 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:53.174 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:53.174 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:53.174 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:53.174 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:53.174 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:53.174 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:53.174 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:53.174 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:53.174 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:53.174 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:53.174 Found net devices under 0000:86:00.0: cvl_0_0 00:23:53.174 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:53.174 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:53.174 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:53.174 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:53.174 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:53.174 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:53.174 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 
00:23:53.174 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:53.174 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:53.174 Found net devices under 0000:86:00.1: cvl_0_1 00:23:53.174 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:53.174 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:53.174 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:23:53.174 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:53.174 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:53.174 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:53.174 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:53.175 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:53.175 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:53.175 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:53.175 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:53.175 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:53.175 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:53.175 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:53.175 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:53.175 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:53.175 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:53.175 01:24:14 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:53.175 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:53.175 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:53.175 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:53.175 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:53.175 01:24:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:53.175 01:24:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:53.175 01:24:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:53.175 01:24:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:53.175 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:53.175 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:23:53.175 00:23:53.175 --- 10.0.0.2 ping statistics --- 00:23:53.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:53.175 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:23:53.175 01:24:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:53.175 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:53.175 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.342 ms 00:23:53.175 00:23:53.175 --- 10.0.0.1 ping statistics --- 00:23:53.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:53.175 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 00:23:53.175 01:24:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:53.175 01:24:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:23:53.175 01:24:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:53.175 01:24:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:53.175 01:24:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:53.175 01:24:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:53.175 01:24:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:53.175 01:24:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:53.175 01:24:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:53.175 01:24:15 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:23:53.175 01:24:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:53.175 01:24:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:53.175 01:24:15 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=980193 00:23:53.175 01:24:15 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:53.175 01:24:15 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 980193 00:23:53.175 01:24:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 980193 ']' 00:23:53.175 01:24:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:53.175 01:24:15 
nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:53.175 01:24:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:53.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:53.175 01:24:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:53.175 01:24:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:53.175 01:24:15 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:53.175 [2024-07-25 01:24:15.127400] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:23:53.175 [2024-07-25 01:24:15.127442] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:53.175 EAL: No free 2048 kB hugepages reported on node 1 00:23:53.175 [2024-07-25 01:24:15.183711] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:53.175 [2024-07-25 01:24:15.265502] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:53.175 [2024-07-25 01:24:15.265538] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:53.175 [2024-07-25 01:24:15.265544] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:53.175 [2024-07-25 01:24:15.265550] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:53.175 [2024-07-25 01:24:15.265556] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:53.175 [2024-07-25 01:24:15.265595] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:53.175 [2024-07-25 01:24:15.265612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:53.175 [2024-07-25 01:24:15.265700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:53.175 [2024-07-25 01:24:15.265702] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:53.747 01:24:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:53.747 01:24:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:23:53.747 01:24:15 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:53.747 01:24:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.747 01:24:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:53.747 [2024-07-25 01:24:15.939980] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:53.747 01:24:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.747 01:24:15 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:23:53.747 01:24:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:53.747 01:24:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:53.747 01:24:15 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:53.747 01:24:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.747 01:24:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:53.747 Malloc0 00:23:53.747 01:24:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.747 01:24:15 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:53.747 
01:24:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.747 01:24:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:53.747 01:24:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.747 01:24:16 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:23:53.747 01:24:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.747 01:24:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:53.747 01:24:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.747 01:24:16 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:53.747 01:24:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.747 01:24:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:53.747 [2024-07-25 01:24:16.019771] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:53.747 01:24:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.747 01:24:16 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:53.747 01:24:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.747 01:24:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:53.747 01:24:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.747 01:24:16 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:23:53.747 01:24:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.747 01:24:16 
nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:53.747 [ 00:23:53.747 { 00:23:53.747 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:53.747 "subtype": "Discovery", 00:23:53.747 "listen_addresses": [ 00:23:53.747 { 00:23:53.747 "trtype": "TCP", 00:23:53.747 "adrfam": "IPv4", 00:23:53.747 "traddr": "10.0.0.2", 00:23:53.747 "trsvcid": "4420" 00:23:53.747 } 00:23:53.747 ], 00:23:53.747 "allow_any_host": true, 00:23:53.747 "hosts": [] 00:23:53.747 }, 00:23:53.747 { 00:23:53.747 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:53.747 "subtype": "NVMe", 00:23:53.747 "listen_addresses": [ 00:23:53.747 { 00:23:53.747 "trtype": "TCP", 00:23:53.747 "adrfam": "IPv4", 00:23:53.747 "traddr": "10.0.0.2", 00:23:53.747 "trsvcid": "4420" 00:23:53.747 } 00:23:53.747 ], 00:23:53.747 "allow_any_host": true, 00:23:53.747 "hosts": [], 00:23:53.747 "serial_number": "SPDK00000000000001", 00:23:53.747 "model_number": "SPDK bdev Controller", 00:23:53.747 "max_namespaces": 32, 00:23:53.747 "min_cntlid": 1, 00:23:53.747 "max_cntlid": 65519, 00:23:53.747 "namespaces": [ 00:23:53.747 { 00:23:53.747 "nsid": 1, 00:23:53.747 "bdev_name": "Malloc0", 00:23:53.747 "name": "Malloc0", 00:23:53.747 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:23:53.747 "eui64": "ABCDEF0123456789", 00:23:53.747 "uuid": "262b3f84-16de-4fb7-b239-95a8e7ae3489" 00:23:53.747 } 00:23:53.747 ] 00:23:53.747 } 00:23:53.747 ] 00:23:53.747 01:24:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.747 01:24:16 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:23:53.747 [2024-07-25 01:24:16.070556] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:23:53.747 [2024-07-25 01:24:16.070589] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid980325 ] 00:23:53.747 EAL: No free 2048 kB hugepages reported on node 1 00:23:53.747 [2024-07-25 01:24:16.100589] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:23:53.747 [2024-07-25 01:24:16.100638] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:53.747 [2024-07-25 01:24:16.100642] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:53.747 [2024-07-25 01:24:16.100653] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:53.747 [2024-07-25 01:24:16.100659] sock.c: 353:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:53.747 [2024-07-25 01:24:16.101194] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:23:53.747 [2024-07-25 01:24:16.101225] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1088ec0 0 00:23:53.747 [2024-07-25 01:24:16.115054] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:53.747 [2024-07-25 01:24:16.115074] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:53.747 [2024-07-25 01:24:16.115078] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:53.747 [2024-07-25 01:24:16.115082] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:53.747 [2024-07-25 01:24:16.115120] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.747 [2024-07-25 01:24:16.115126] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:23:53.747 [2024-07-25 01:24:16.115130] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1088ec0) 00:23:53.747 [2024-07-25 01:24:16.115143] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:53.747 [2024-07-25 01:24:16.115160] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x110be40, cid 0, qid 0 00:23:53.747 [2024-07-25 01:24:16.122052] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.747 [2024-07-25 01:24:16.122060] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.747 [2024-07-25 01:24:16.122063] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.747 [2024-07-25 01:24:16.122067] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x110be40) on tqpair=0x1088ec0 00:23:53.747 [2024-07-25 01:24:16.122077] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:53.747 [2024-07-25 01:24:16.122082] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:23:53.747 [2024-07-25 01:24:16.122087] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:23:53.747 [2024-07-25 01:24:16.122099] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.747 [2024-07-25 01:24:16.122103] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.747 [2024-07-25 01:24:16.122107] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1088ec0) 00:23:53.747 [2024-07-25 01:24:16.122113] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.747 [2024-07-25 01:24:16.122125] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x110be40, cid 0, qid 0 00:23:53.747 [2024-07-25 01:24:16.122434] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.747 [2024-07-25 01:24:16.122445] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.747 [2024-07-25 01:24:16.122448] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.747 [2024-07-25 01:24:16.122452] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x110be40) on tqpair=0x1088ec0 00:23:53.747 [2024-07-25 01:24:16.122457] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:23:53.747 [2024-07-25 01:24:16.122468] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:23:53.747 [2024-07-25 01:24:16.122476] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.747 [2024-07-25 01:24:16.122480] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.747 [2024-07-25 01:24:16.122483] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1088ec0) 00:23:53.747 [2024-07-25 01:24:16.122489] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.747 [2024-07-25 01:24:16.122502] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x110be40, cid 0, qid 0 00:23:53.747 [2024-07-25 01:24:16.122692] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.747 [2024-07-25 01:24:16.122702] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.747 [2024-07-25 01:24:16.122705] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.748 [2024-07-25 01:24:16.122708] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x110be40) on tqpair=0x1088ec0 00:23:53.748 [2024-07-25 
01:24:16.122713] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:23:53.748 [2024-07-25 01:24:16.122721] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:23:53.748 [2024-07-25 01:24:16.122728] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.748 [2024-07-25 01:24:16.122732] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.748 [2024-07-25 01:24:16.122735] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1088ec0) 00:23:53.748 [2024-07-25 01:24:16.122741] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.748 [2024-07-25 01:24:16.122754] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x110be40, cid 0, qid 0 00:23:53.748 [2024-07-25 01:24:16.122933] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.748 [2024-07-25 01:24:16.122942] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.748 [2024-07-25 01:24:16.122945] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.748 [2024-07-25 01:24:16.122949] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x110be40) on tqpair=0x1088ec0 00:23:53.748 [2024-07-25 01:24:16.122954] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:53.748 [2024-07-25 01:24:16.122965] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.748 [2024-07-25 01:24:16.122969] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.748 [2024-07-25 01:24:16.122972] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1088ec0) 
00:23:53.748 [2024-07-25 01:24:16.122978] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.748 [2024-07-25 01:24:16.122990] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x110be40, cid 0, qid 0 00:23:53.748 [2024-07-25 01:24:16.123145] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.748 [2024-07-25 01:24:16.123155] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.748 [2024-07-25 01:24:16.123158] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.748 [2024-07-25 01:24:16.123162] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x110be40) on tqpair=0x1088ec0 00:23:53.748 [2024-07-25 01:24:16.123166] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:23:53.748 [2024-07-25 01:24:16.123171] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:23:53.748 [2024-07-25 01:24:16.123182] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:53.748 [2024-07-25 01:24:16.123287] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:23:53.748 [2024-07-25 01:24:16.123291] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:53.748 [2024-07-25 01:24:16.123300] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.748 [2024-07-25 01:24:16.123304] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.748 [2024-07-25 01:24:16.123307] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=0 on tqpair(0x1088ec0) 00:23:53.748 [2024-07-25 01:24:16.123313] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.748 [2024-07-25 01:24:16.123326] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x110be40, cid 0, qid 0 00:23:53.748 [2024-07-25 01:24:16.123476] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.748 [2024-07-25 01:24:16.123485] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.748 [2024-07-25 01:24:16.123488] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.748 [2024-07-25 01:24:16.123491] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x110be40) on tqpair=0x1088ec0 00:23:53.748 [2024-07-25 01:24:16.123496] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:53.748 [2024-07-25 01:24:16.123507] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.748 [2024-07-25 01:24:16.123510] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.748 [2024-07-25 01:24:16.123513] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1088ec0) 00:23:53.748 [2024-07-25 01:24:16.123520] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.748 [2024-07-25 01:24:16.123532] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x110be40, cid 0, qid 0 00:23:53.748 [2024-07-25 01:24:16.123677] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.748 [2024-07-25 01:24:16.123687] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.748 [2024-07-25 01:24:16.123690] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.748 
[2024-07-25 01:24:16.123693] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x110be40) on tqpair=0x1088ec0 00:23:53.748 [2024-07-25 01:24:16.123697] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:53.748 [2024-07-25 01:24:16.123701] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:23:53.748 [2024-07-25 01:24:16.123710] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:23:53.748 [2024-07-25 01:24:16.123718] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:23:53.748 [2024-07-25 01:24:16.123728] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.748 [2024-07-25 01:24:16.123731] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1088ec0) 00:23:53.748 [2024-07-25 01:24:16.123738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.748 [2024-07-25 01:24:16.123750] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x110be40, cid 0, qid 0 00:23:53.748 [2024-07-25 01:24:16.123928] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:53.748 [2024-07-25 01:24:16.123938] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:53.748 [2024-07-25 01:24:16.123944] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:53.748 [2024-07-25 01:24:16.123948] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1088ec0): datao=0, datal=4096, cccid=0 00:23:53.748 [2024-07-25 01:24:16.123952] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x110be40) on tqpair(0x1088ec0): expected_datao=0, payload_size=4096 00:23:53.748 [2024-07-25 01:24:16.123956] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.748 [2024-07-25 01:24:16.124212] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:53.748 [2024-07-25 01:24:16.124216] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:53.748 [2024-07-25 01:24:16.168050] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.748 [2024-07-25 01:24:16.168059] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.748 [2024-07-25 01:24:16.168062] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.748 [2024-07-25 01:24:16.168065] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x110be40) on tqpair=0x1088ec0 00:23:53.748 [2024-07-25 01:24:16.168073] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:23:53.748 [2024-07-25 01:24:16.168080] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:23:53.748 [2024-07-25 01:24:16.168085] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:23:53.748 [2024-07-25 01:24:16.168089] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:23:53.748 [2024-07-25 01:24:16.168094] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:23:53.748 [2024-07-25 01:24:16.168098] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:23:53.748 [2024-07-25 01:24:16.168106] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:23:53.748 [2024-07-25 01:24:16.168113] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.748 [2024-07-25 01:24:16.168116] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.748 [2024-07-25 01:24:16.168119] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1088ec0) 00:23:53.748 [2024-07-25 01:24:16.168126] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:53.748 [2024-07-25 01:24:16.168139] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x110be40, cid 0, qid 0 00:23:53.748 [2024-07-25 01:24:16.168358] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.748 [2024-07-25 01:24:16.168367] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.748 [2024-07-25 01:24:16.168371] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.748 [2024-07-25 01:24:16.168374] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x110be40) on tqpair=0x1088ec0 00:23:53.748 [2024-07-25 01:24:16.168381] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.748 [2024-07-25 01:24:16.168385] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.748 [2024-07-25 01:24:16.168388] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1088ec0) 00:23:53.748 [2024-07-25 01:24:16.168394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:53.748 [2024-07-25 01:24:16.168400] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.748 [2024-07-25 01:24:16.168403] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.748 [2024-07-25 
01:24:16.168406] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1088ec0) 00:23:53.748 [2024-07-25 01:24:16.168411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:53.748 [2024-07-25 01:24:16.168419] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.748 [2024-07-25 01:24:16.168422] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.748 [2024-07-25 01:24:16.168425] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1088ec0) 00:23:53.748 [2024-07-25 01:24:16.168430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:53.748 [2024-07-25 01:24:16.168435] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.748 [2024-07-25 01:24:16.168438] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.748 [2024-07-25 01:24:16.168441] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1088ec0) 00:23:53.749 [2024-07-25 01:24:16.168446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:53.749 [2024-07-25 01:24:16.168450] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:23:53.749 [2024-07-25 01:24:16.168462] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:53.749 [2024-07-25 01:24:16.168468] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.749 [2024-07-25 01:24:16.168471] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1088ec0) 00:23:53.749 [2024-07-25 
01:24:16.168477] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.749 [2024-07-25 01:24:16.168490] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x110be40, cid 0, qid 0 00:23:53.749 [2024-07-25 01:24:16.168495] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x110bfc0, cid 1, qid 0 00:23:53.749 [2024-07-25 01:24:16.168499] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x110c140, cid 2, qid 0 00:23:53.749 [2024-07-25 01:24:16.168503] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x110c2c0, cid 3, qid 0 00:23:53.749 [2024-07-25 01:24:16.168507] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x110c440, cid 4, qid 0 00:23:53.749 [2024-07-25 01:24:16.168691] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.749 [2024-07-25 01:24:16.168701] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.749 [2024-07-25 01:24:16.168704] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.749 [2024-07-25 01:24:16.168707] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x110c440) on tqpair=0x1088ec0 00:23:53.749 [2024-07-25 01:24:16.168712] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:23:53.749 [2024-07-25 01:24:16.168717] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:23:53.749 [2024-07-25 01:24:16.168729] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.749 [2024-07-25 01:24:16.168732] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1088ec0) 00:23:53.749 [2024-07-25 01:24:16.168739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY 
(06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.749 [2024-07-25 01:24:16.168751] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x110c440, cid 4, qid 0 00:23:53.749 [2024-07-25 01:24:16.168938] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:53.749 [2024-07-25 01:24:16.168948] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:53.749 [2024-07-25 01:24:16.168951] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:53.749 [2024-07-25 01:24:16.168954] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1088ec0): datao=0, datal=4096, cccid=4 00:23:53.749 [2024-07-25 01:24:16.168958] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x110c440) on tqpair(0x1088ec0): expected_datao=0, payload_size=4096 00:23:53.749 [2024-07-25 01:24:16.168965] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.749 [2024-07-25 01:24:16.168971] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:53.749 [2024-07-25 01:24:16.168974] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:53.749 [2024-07-25 01:24:16.169078] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.749 [2024-07-25 01:24:16.169087] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.749 [2024-07-25 01:24:16.169090] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.749 [2024-07-25 01:24:16.169094] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x110c440) on tqpair=0x1088ec0 00:23:53.749 [2024-07-25 01:24:16.169107] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:23:53.749 [2024-07-25 01:24:16.169130] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.749 [2024-07-25 01:24:16.169134] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1088ec0) 00:23:53.749 [2024-07-25 01:24:16.169140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.749 [2024-07-25 01:24:16.169146] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.749 [2024-07-25 01:24:16.169150] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.749 [2024-07-25 01:24:16.169153] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1088ec0) 00:23:53.749 [2024-07-25 01:24:16.169158] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:53.749 [2024-07-25 01:24:16.169174] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x110c440, cid 4, qid 0 00:23:53.749 [2024-07-25 01:24:16.169179] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x110c5c0, cid 5, qid 0 00:23:53.749 [2024-07-25 01:24:16.169380] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:53.749 [2024-07-25 01:24:16.169390] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:53.749 [2024-07-25 01:24:16.169393] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:53.749 [2024-07-25 01:24:16.169396] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1088ec0): datao=0, datal=1024, cccid=4 00:23:53.749 [2024-07-25 01:24:16.169400] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x110c440) on tqpair(0x1088ec0): expected_datao=0, payload_size=1024 00:23:53.749 [2024-07-25 01:24:16.169404] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.749 [2024-07-25 01:24:16.169410] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:53.749 [2024-07-25 01:24:16.169413] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:53.749 [2024-07-25 01:24:16.169418] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.749 [2024-07-25 01:24:16.169423] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.749 [2024-07-25 01:24:16.169426] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.749 [2024-07-25 01:24:16.169429] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x110c5c0) on tqpair=0x1088ec0 00:23:53.749 [2024-07-25 01:24:16.210271] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.749 [2024-07-25 01:24:16.210286] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.749 [2024-07-25 01:24:16.210289] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.749 [2024-07-25 01:24:16.210293] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x110c440) on tqpair=0x1088ec0 00:23:53.749 [2024-07-25 01:24:16.210309] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.749 [2024-07-25 01:24:16.210313] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1088ec0) 00:23:53.749 [2024-07-25 01:24:16.210320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.749 [2024-07-25 01:24:16.210343] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x110c440, cid 4, qid 0 00:23:53.749 [2024-07-25 01:24:16.210503] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:53.749 [2024-07-25 01:24:16.210513] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:53.749 [2024-07-25 01:24:16.210516] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:53.749 [2024-07-25 01:24:16.210519] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info 
on tqpair(0x1088ec0): datao=0, datal=3072, cccid=4 00:23:53.749 [2024-07-25 01:24:16.210523] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x110c440) on tqpair(0x1088ec0): expected_datao=0, payload_size=3072 00:23:53.749 [2024-07-25 01:24:16.210527] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.749 [2024-07-25 01:24:16.210802] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:53.749 [2024-07-25 01:24:16.210806] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:54.014 [2024-07-25 01:24:16.251403] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.014 [2024-07-25 01:24:16.251413] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.014 [2024-07-25 01:24:16.251417] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.014 [2024-07-25 01:24:16.251420] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x110c440) on tqpair=0x1088ec0 00:23:54.014 [2024-07-25 01:24:16.251430] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.014 [2024-07-25 01:24:16.251433] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1088ec0) 00:23:54.014 [2024-07-25 01:24:16.251440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.014 [2024-07-25 01:24:16.251456] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x110c440, cid 4, qid 0 00:23:54.014 [2024-07-25 01:24:16.251610] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:54.014 [2024-07-25 01:24:16.251620] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:54.014 [2024-07-25 01:24:16.251623] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:54.014 [2024-07-25 01:24:16.251626] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
c2h_data info on tqpair(0x1088ec0): datao=0, datal=8, cccid=4
00:23:54.014 [2024-07-25 01:24:16.251630] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x110c440) on tqpair(0x1088ec0): expected_datao=0, payload_size=8
00:23:54.014 [2024-07-25 01:24:16.251634] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:54.014 [2024-07-25 01:24:16.251639] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:23:54.014 [2024-07-25 01:24:16.251642] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:23:54.014 [2024-07-25 01:24:16.296054] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:54.014 [2024-07-25 01:24:16.296066] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:54.014 [2024-07-25 01:24:16.296069] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:54.014 [2024-07-25 01:24:16.296073] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x110c440) on tqpair=0x1088ec0
00:23:54.014 =====================================================
00:23:54.014 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:23:54.014 =====================================================
00:23:54.014 Controller Capabilities/Features
00:23:54.014 ================================
00:23:54.014 Vendor ID: 0000
00:23:54.014 Subsystem Vendor ID: 0000
00:23:54.014 Serial Number: ....................
00:23:54.014 Model Number: ........................................
00:23:54.014 Firmware Version: 24.09
00:23:54.014 Recommended Arb Burst: 0
00:23:54.014 IEEE OUI Identifier: 00 00 00
00:23:54.014 Multi-path I/O
00:23:54.014 May have multiple subsystem ports: No
00:23:54.014 May have multiple controllers: No
00:23:54.014 Associated with SR-IOV VF: No
00:23:54.014 Max Data Transfer Size: 131072
00:23:54.014 Max Number of Namespaces: 0
00:23:54.014 Max Number of I/O Queues: 1024
00:23:54.014 NVMe Specification Version (VS): 1.3
00:23:54.014 NVMe Specification Version (Identify): 1.3
00:23:54.014 Maximum Queue Entries: 128
00:23:54.014 Contiguous Queues Required: Yes
00:23:54.014 Arbitration Mechanisms Supported
00:23:54.014 Weighted Round Robin: Not Supported
00:23:54.014 Vendor Specific: Not Supported
00:23:54.014 Reset Timeout: 15000 ms
00:23:54.014 Doorbell Stride: 4 bytes
00:23:54.014 NVM Subsystem Reset: Not Supported
00:23:54.014 Command Sets Supported
00:23:54.014 NVM Command Set: Supported
00:23:54.014 Boot Partition: Not Supported
00:23:54.014 Memory Page Size Minimum: 4096 bytes
00:23:54.014 Memory Page Size Maximum: 4096 bytes
00:23:54.014 Persistent Memory Region: Not Supported
00:23:54.014 Optional Asynchronous Events Supported
00:23:54.014 Namespace Attribute Notices: Not Supported
00:23:54.014 Firmware Activation Notices: Not Supported
00:23:54.014 ANA Change Notices: Not Supported
00:23:54.014 PLE Aggregate Log Change Notices: Not Supported
00:23:54.014 LBA Status Info Alert Notices: Not Supported
00:23:54.014 EGE Aggregate Log Change Notices: Not Supported
00:23:54.014 Normal NVM Subsystem Shutdown event: Not Supported
00:23:54.014 Zone Descriptor Change Notices: Not Supported
00:23:54.014 Discovery Log Change Notices: Supported
00:23:54.014 Controller Attributes
00:23:54.014 128-bit Host Identifier: Not Supported
00:23:54.014 Non-Operational Permissive Mode: Not Supported
00:23:54.014 NVM Sets: Not Supported
00:23:54.015 Read Recovery Levels: Not Supported
00:23:54.015 Endurance Groups: Not Supported
00:23:54.015
Predictable Latency Mode: Not Supported
00:23:54.015 Traffic Based Keep ALive: Not Supported
00:23:54.015 Namespace Granularity: Not Supported
00:23:54.015 SQ Associations: Not Supported
00:23:54.015 UUID List: Not Supported
00:23:54.015 Multi-Domain Subsystem: Not Supported
00:23:54.015 Fixed Capacity Management: Not Supported
00:23:54.015 Variable Capacity Management: Not Supported
00:23:54.015 Delete Endurance Group: Not Supported
00:23:54.015 Delete NVM Set: Not Supported
00:23:54.015 Extended LBA Formats Supported: Not Supported
00:23:54.015 Flexible Data Placement Supported: Not Supported
00:23:54.015
00:23:54.015 Controller Memory Buffer Support
00:23:54.015 ================================
00:23:54.015 Supported: No
00:23:54.015
00:23:54.015 Persistent Memory Region Support
00:23:54.015 ================================
00:23:54.015 Supported: No
00:23:54.015
00:23:54.015 Admin Command Set Attributes
00:23:54.015 ============================
00:23:54.015 Security Send/Receive: Not Supported
00:23:54.015 Format NVM: Not Supported
00:23:54.015 Firmware Activate/Download: Not Supported
00:23:54.015 Namespace Management: Not Supported
00:23:54.015 Device Self-Test: Not Supported
00:23:54.015 Directives: Not Supported
00:23:54.015 NVMe-MI: Not Supported
00:23:54.015 Virtualization Management: Not Supported
00:23:54.015 Doorbell Buffer Config: Not Supported
00:23:54.015 Get LBA Status Capability: Not Supported
00:23:54.015 Command & Feature Lockdown Capability: Not Supported
00:23:54.015 Abort Command Limit: 1
00:23:54.015 Async Event Request Limit: 4
00:23:54.015 Number of Firmware Slots: N/A
00:23:54.015 Firmware Slot 1 Read-Only: N/A
00:23:54.015 Firmware Activation Without Reset: N/A
00:23:54.015 Multiple Update Detection Support: N/A
00:23:54.015 Firmware Update Granularity: No Information Provided
00:23:54.015 Per-Namespace SMART Log: No
00:23:54.015 Asymmetric Namespace Access Log Page: Not Supported
00:23:54.015 Subsystem NQN:
nqn.2014-08.org.nvmexpress.discovery
00:23:54.015 Command Effects Log Page: Not Supported
00:23:54.015 Get Log Page Extended Data: Supported
00:23:54.015 Telemetry Log Pages: Not Supported
00:23:54.015 Persistent Event Log Pages: Not Supported
00:23:54.015 Supported Log Pages Log Page: May Support
00:23:54.015 Commands Supported & Effects Log Page: Not Supported
00:23:54.015 Feature Identifiers & Effects Log Page: May Support
00:23:54.015 NVMe-MI Commands & Effects Log Page: May Support
00:23:54.015 Data Area 4 for Telemetry Log: Not Supported
00:23:54.015 Error Log Page Entries Supported: 128
00:23:54.015 Keep Alive: Not Supported
00:23:54.015
00:23:54.015 NVM Command Set Attributes
00:23:54.015 ==========================
00:23:54.015 Submission Queue Entry Size
00:23:54.015 Max: 1
00:23:54.015 Min: 1
00:23:54.015 Completion Queue Entry Size
00:23:54.015 Max: 1
00:23:54.015 Min: 1
00:23:54.015 Number of Namespaces: 0
00:23:54.015 Compare Command: Not Supported
00:23:54.015 Write Uncorrectable Command: Not Supported
00:23:54.015 Dataset Management Command: Not Supported
00:23:54.015 Write Zeroes Command: Not Supported
00:23:54.015 Set Features Save Field: Not Supported
00:23:54.015 Reservations: Not Supported
00:23:54.015 Timestamp: Not Supported
00:23:54.015 Copy: Not Supported
00:23:54.015 Volatile Write Cache: Not Present
00:23:54.015 Atomic Write Unit (Normal): 1
00:23:54.015 Atomic Write Unit (PFail): 1
00:23:54.015 Atomic Compare & Write Unit: 1
00:23:54.015 Fused Compare & Write: Supported
00:23:54.015 Scatter-Gather List
00:23:54.015 SGL Command Set: Supported
00:23:54.015 SGL Keyed: Supported
00:23:54.015 SGL Bit Bucket Descriptor: Not Supported
00:23:54.015 SGL Metadata Pointer: Not Supported
00:23:54.015 Oversized SGL: Not Supported
00:23:54.015 SGL Metadata Address: Not Supported
00:23:54.015 SGL Offset: Supported
00:23:54.015 Transport SGL Data Block: Not Supported
00:23:54.015 Replay Protected Memory Block: Not Supported
00:23:54.015
00:23:54.015
Firmware Slot Information
00:23:54.015 =========================
00:23:54.015 Active slot: 0
00:23:54.015
00:23:54.015
00:23:54.015 Error Log
00:23:54.015 =========
00:23:54.015
00:23:54.015 Active Namespaces
00:23:54.015 =================
00:23:54.015 Discovery Log Page
00:23:54.015 ==================
00:23:54.015 Generation Counter: 2
00:23:54.015 Number of Records: 2
00:23:54.015 Record Format: 0
00:23:54.015
00:23:54.015 Discovery Log Entry 0
00:23:54.015 ----------------------
00:23:54.015 Transport Type: 3 (TCP)
00:23:54.015 Address Family: 1 (IPv4)
00:23:54.015 Subsystem Type: 3 (Current Discovery Subsystem)
00:23:54.015 Entry Flags:
00:23:54.015 Duplicate Returned Information: 1
00:23:54.015 Explicit Persistent Connection Support for Discovery: 1
00:23:54.015 Transport Requirements:
00:23:54.015 Secure Channel: Not Required
00:23:54.015 Port ID: 0 (0x0000)
00:23:54.015 Controller ID: 65535 (0xffff)
00:23:54.015 Admin Max SQ Size: 128
00:23:54.015 Transport Service Identifier: 4420
00:23:54.015 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:23:54.015 Transport Address: 10.0.0.2
00:23:54.015 Discovery Log Entry 1
00:23:54.015 ----------------------
00:23:54.015 Transport Type: 3 (TCP)
00:23:54.015 Address Family: 1 (IPv4)
00:23:54.015 Subsystem Type: 2 (NVM Subsystem)
00:23:54.015 Entry Flags:
00:23:54.015 Duplicate Returned Information: 0
00:23:54.015 Explicit Persistent Connection Support for Discovery: 0
00:23:54.015 Transport Requirements:
00:23:54.015 Secure Channel: Not Required
00:23:54.015 Port ID: 0 (0x0000)
00:23:54.015 Controller ID: 65535 (0xffff)
00:23:54.015 Admin Max SQ Size: 128
00:23:54.015 Transport Service Identifier: 4420
00:23:54.015 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:23:54.015 Transport Address: 10.0.0.2 [2024-07-25 01:24:16.296155] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD
00:23:54.015 [2024-07-25 01:24:16.296165]
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x110be40) on tqpair=0x1088ec0 00:23:54.015 [2024-07-25 01:24:16.296171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.015 [2024-07-25 01:24:16.296176] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x110bfc0) on tqpair=0x1088ec0 00:23:54.015 [2024-07-25 01:24:16.296180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.015 [2024-07-25 01:24:16.296184] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x110c140) on tqpair=0x1088ec0 00:23:54.015 [2024-07-25 01:24:16.296189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.015 [2024-07-25 01:24:16.296193] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x110c2c0) on tqpair=0x1088ec0 00:23:54.015 [2024-07-25 01:24:16.296197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.015 [2024-07-25 01:24:16.296207] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.015 [2024-07-25 01:24:16.296211] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.015 [2024-07-25 01:24:16.296214] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1088ec0) 00:23:54.015 [2024-07-25 01:24:16.296221] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.015 [2024-07-25 01:24:16.296235] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x110c2c0, cid 3, qid 0 00:23:54.015 [2024-07-25 01:24:16.296398] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.015 [2024-07-25 01:24:16.296408] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.015 [2024-07-25 01:24:16.296412] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.015 [2024-07-25 01:24:16.296415] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x110c2c0) on tqpair=0x1088ec0 00:23:54.015 [2024-07-25 01:24:16.296423] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.015 [2024-07-25 01:24:16.296426] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.015 [2024-07-25 01:24:16.296429] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1088ec0) 00:23:54.015 [2024-07-25 01:24:16.296436] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.015 [2024-07-25 01:24:16.296453] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x110c2c0, cid 3, qid 0 00:23:54.015 [2024-07-25 01:24:16.296623] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.015 [2024-07-25 01:24:16.296633] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.015 [2024-07-25 01:24:16.296636] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.016 [2024-07-25 01:24:16.296639] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x110c2c0) on tqpair=0x1088ec0 00:23:54.016 [2024-07-25 01:24:16.296644] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:23:54.016 [2024-07-25 01:24:16.296648] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:23:54.016 [2024-07-25 01:24:16.296659] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.016 [2024-07-25 01:24:16.296663] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.016 [2024-07-25 
01:24:16.296666] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1088ec0) 00:23:54.016 [2024-07-25 01:24:16.296672] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.016 [2024-07-25 01:24:16.296684] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x110c2c0, cid 3, qid 0 00:23:54.016 [2024-07-25 01:24:16.296831] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.016 [2024-07-25 01:24:16.296841] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.016 [2024-07-25 01:24:16.296844] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.016 [2024-07-25 01:24:16.296847] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x110c2c0) on tqpair=0x1088ec0 00:23:54.016 [2024-07-25 01:24:16.296859] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.016 [2024-07-25 01:24:16.296863] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.016 [2024-07-25 01:24:16.296866] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1088ec0) 00:23:54.016 [2024-07-25 01:24:16.296872] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.016 [2024-07-25 01:24:16.296887] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x110c2c0, cid 3, qid 0 00:23:54.016 [2024-07-25 01:24:16.297037] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.016 [2024-07-25 01:24:16.297053] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.016 [2024-07-25 01:24:16.297056] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.016 [2024-07-25 01:24:16.297060] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x110c2c0) on tqpair=0x1088ec0 
00:23:54.016 [2024-07-25 01:24:16.297071] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.016 [2024-07-25 01:24:16.297075] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.016 [2024-07-25 01:24:16.297078] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1088ec0) 00:23:54.016 [2024-07-25 01:24:16.297084] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.016 [2024-07-25 01:24:16.297097] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x110c2c0, cid 3, qid 0 00:23:54.016 [2024-07-25 01:24:16.297247] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.016 [2024-07-25 01:24:16.297256] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.016 [2024-07-25 01:24:16.297259] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.016 [2024-07-25 01:24:16.297262] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x110c2c0) on tqpair=0x1088ec0 00:23:54.016 [2024-07-25 01:24:16.297273] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.016 [2024-07-25 01:24:16.297277] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.016 [2024-07-25 01:24:16.297280] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1088ec0) 00:23:54.016 [2024-07-25 01:24:16.297286] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.016 [2024-07-25 01:24:16.297298] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x110c2c0, cid 3, qid 0 00:23:54.016 [2024-07-25 01:24:16.297445] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.016 [2024-07-25 01:24:16.297455] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.016 
[2024-07-25 01:24:16.297458] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.016 [2024-07-25 01:24:16.297461] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x110c2c0) on tqpair=0x1088ec0 00:23:54.016 [2024-07-25 01:24:16.297472] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.016 [2024-07-25 01:24:16.297476] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.016 [2024-07-25 01:24:16.297479] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1088ec0) 00:23:54.016 [2024-07-25 01:24:16.297486] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.016 [2024-07-25 01:24:16.297498] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x110c2c0, cid 3, qid 0 00:23:54.016 [2024-07-25 01:24:16.297651] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.016 [2024-07-25 01:24:16.297661] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.016 [2024-07-25 01:24:16.297664] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.016 [2024-07-25 01:24:16.297668] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x110c2c0) on tqpair=0x1088ec0 00:23:54.016 [2024-07-25 01:24:16.297678] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.016 [2024-07-25 01:24:16.297682] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.016 [2024-07-25 01:24:16.297685] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1088ec0) 00:23:54.016 [2024-07-25 01:24:16.297692] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.016 [2024-07-25 01:24:16.297706] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x110c2c0, cid 3, qid 
0 00:23:54.016 [2024-07-25 01:24:16.297853] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.016 [2024-07-25 01:24:16.297862] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.016 [2024-07-25 01:24:16.297865] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.016 [2024-07-25 01:24:16.297869] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x110c2c0) on tqpair=0x1088ec0 00:23:54.016 [2024-07-25 01:24:16.297880] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.016 [2024-07-25 01:24:16.297884] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.016 [2024-07-25 01:24:16.297887] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1088ec0) 00:23:54.016 [2024-07-25 01:24:16.297893] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.016 [2024-07-25 01:24:16.297905] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x110c2c0, cid 3, qid 0 00:23:54.016 [2024-07-25 01:24:16.298058] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.016 [2024-07-25 01:24:16.298068] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.016 [2024-07-25 01:24:16.298071] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.016 [2024-07-25 01:24:16.298075] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x110c2c0) on tqpair=0x1088ec0 00:23:54.016 [2024-07-25 01:24:16.298085] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.016 [2024-07-25 01:24:16.298089] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.016 [2024-07-25 01:24:16.298092] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1088ec0) 00:23:54.016 [2024-07-25 01:24:16.298098] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.016 [2024-07-25 01:24:16.298111] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x110c2c0, cid 3, qid 0 00:23:54.016 [2024-07-25 01:24:16.298264] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.016 [2024-07-25 01:24:16.298274] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.016 [2024-07-25 01:24:16.298277] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.016 [2024-07-25 01:24:16.298280] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x110c2c0) on tqpair=0x1088ec0 00:23:54.016 [2024-07-25 01:24:16.298290] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.016 [2024-07-25 01:24:16.298294] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.016 [2024-07-25 01:24:16.298297] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1088ec0) 00:23:54.016 [2024-07-25 01:24:16.298304] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.016 [2024-07-25 01:24:16.298316] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x110c2c0, cid 3, qid 0 00:23:54.016 [2024-07-25 01:24:16.298463] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.016 [2024-07-25 01:24:16.298473] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.016 [2024-07-25 01:24:16.298476] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.016 [2024-07-25 01:24:16.298479] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x110c2c0) on tqpair=0x1088ec0 00:23:54.016 [2024-07-25 01:24:16.298489] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.016 [2024-07-25 01:24:16.298493] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.016 [2024-07-25 01:24:16.298496] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1088ec0) 00:23:54.016 [2024-07-25 01:24:16.298503] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.016 [2024-07-25 01:24:16.298515] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x110c2c0, cid 3, qid 0 00:23:54.016 [2024-07-25 01:24:16.298667] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.016 [2024-07-25 01:24:16.298677] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.016 [2024-07-25 01:24:16.298679] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.016 [2024-07-25 01:24:16.298683] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x110c2c0) on tqpair=0x1088ec0 00:23:54.016 [2024-07-25 01:24:16.298694] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.016 [2024-07-25 01:24:16.298697] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.016 [2024-07-25 01:24:16.298700] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1088ec0) 00:23:54.016 [2024-07-25 01:24:16.298707] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.016 [2024-07-25 01:24:16.298718] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x110c2c0, cid 3, qid 0 00:23:54.016 [2024-07-25 01:24:16.298869] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.016 [2024-07-25 01:24:16.298878] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.016 [2024-07-25 01:24:16.298881] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.016 [2024-07-25 01:24:16.298884] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x110c2c0) on tqpair=0x1088ec0 00:23:54.016 [2024-07-25 01:24:16.298895] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.016 [2024-07-25 01:24:16.298899] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.016 [2024-07-25 01:24:16.298902] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1088ec0) 00:23:54.017 [2024-07-25 01:24:16.298908] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.017 [2024-07-25 01:24:16.298920] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x110c2c0, cid 3, qid 0 00:23:54.017 [2024-07-25 01:24:16.299075] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.017 [2024-07-25 01:24:16.299085] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.017 [2024-07-25 01:24:16.299088] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.017 [2024-07-25 01:24:16.299092] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x110c2c0) on tqpair=0x1088ec0 00:23:54.017 [2024-07-25 01:24:16.299102] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.017 [2024-07-25 01:24:16.299106] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.017 [2024-07-25 01:24:16.299109] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1088ec0) 00:23:54.017 [2024-07-25 01:24:16.299116] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.017 [2024-07-25 01:24:16.299128] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x110c2c0, cid 3, qid 0 00:23:54.017 [2024-07-25 01:24:16.299273] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.017 [2024-07-25 
01:24:16.299282] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.017 [2024-07-25 01:24:16.299285] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.017 [2024-07-25 01:24:16.299288] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x110c2c0) on tqpair=0x1088ec0 00:23:54.017 [2024-07-25 01:24:16.299299] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.017 [2024-07-25 01:24:16.299302] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.017 [2024-07-25 01:24:16.299305] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1088ec0) 00:23:54.017 [2024-07-25 01:24:16.299312] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.017 [2024-07-25 01:24:16.299323] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x110c2c0, cid 3, qid 0 00:23:54.017 [2024-07-25 01:24:16.299471] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.017 [2024-07-25 01:24:16.299485] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.017 [2024-07-25 01:24:16.299488] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.017 [2024-07-25 01:24:16.299492] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x110c2c0) on tqpair=0x1088ec0 00:23:54.017 [2024-07-25 01:24:16.299502] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.017 [2024-07-25 01:24:16.299506] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.017 [2024-07-25 01:24:16.299509] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1088ec0) 00:23:54.017 [2024-07-25 01:24:16.299515] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.017 [2024-07-25 
01:24:16.299527] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x110c2c0, cid 3, qid 0 00:23:54.017 [2024-07-25 01:24:16.299672] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.017 [2024-07-25 01:24:16.299681] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.017 [2024-07-25 01:24:16.299684] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.017 [2024-07-25 01:24:16.299687] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x110c2c0) on tqpair=0x1088ec0 00:23:54.017 [2024-07-25 01:24:16.299698] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.017 [2024-07-25 01:24:16.299702] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.017 [2024-07-25 01:24:16.299705] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1088ec0) 00:23:54.017 [2024-07-25 01:24:16.299711] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.017 [2024-07-25 01:24:16.299723] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x110c2c0, cid 3, qid 0 00:23:54.017 [2024-07-25 01:24:16.299870] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.017 [2024-07-25 01:24:16.299879] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.017 [2024-07-25 01:24:16.299882] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.017 [2024-07-25 01:24:16.299885] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x110c2c0) on tqpair=0x1088ec0 00:23:54.017 [2024-07-25 01:24:16.299896] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.017 [2024-07-25 01:24:16.299899] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.017 [2024-07-25 01:24:16.299902] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=3 on tqpair(0x1088ec0) 00:23:54.017 [2024-07-25 01:24:16.299909] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.017 [2024-07-25 01:24:16.299921] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x110c2c0, cid 3, qid 0 00:23:54.017 [2024-07-25 01:24:16.304051] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.017 [2024-07-25 01:24:16.304064] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.017 [2024-07-25 01:24:16.304067] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.017 [2024-07-25 01:24:16.304071] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x110c2c0) on tqpair=0x1088ec0 00:23:54.017 [2024-07-25 01:24:16.304082] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.017 [2024-07-25 01:24:16.304086] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.017 [2024-07-25 01:24:16.304089] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1088ec0) 00:23:54.017 [2024-07-25 01:24:16.304096] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.017 [2024-07-25 01:24:16.304109] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x110c2c0, cid 3, qid 0 00:23:54.017 [2024-07-25 01:24:16.304323] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.017 [2024-07-25 01:24:16.304332] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.017 [2024-07-25 01:24:16.304338] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.017 [2024-07-25 01:24:16.304342] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x110c2c0) on tqpair=0x1088ec0 00:23:54.017 [2024-07-25 01:24:16.304350] 
nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:23:54.017 00:23:54.017 01:24:16 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:23:54.017 [2024-07-25 01:24:16.344280] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:23:54.017 [2024-07-25 01:24:16.344315] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid980443 ] 00:23:54.017 EAL: No free 2048 kB hugepages reported on node 1 00:23:54.017 [2024-07-25 01:24:16.371276] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:23:54.017 [2024-07-25 01:24:16.371317] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:54.017 [2024-07-25 01:24:16.371322] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:54.017 [2024-07-25 01:24:16.371333] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:54.017 [2024-07-25 01:24:16.371339] sock.c: 353:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:54.017 [2024-07-25 01:24:16.371799] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:23:54.017 [2024-07-25 01:24:16.371824] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x237cec0 0 00:23:54.017 [2024-07-25 01:24:16.385080] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:54.017 [2024-07-25 
01:24:16.385098] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:54.017 [2024-07-25 01:24:16.385102] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:54.017 [2024-07-25 01:24:16.385106] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:54.017 [2024-07-25 01:24:16.385136] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.017 [2024-07-25 01:24:16.385141] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.017 [2024-07-25 01:24:16.385144] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x237cec0) 00:23:54.017 [2024-07-25 01:24:16.385154] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:54.017 [2024-07-25 01:24:16.385171] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23ffe40, cid 0, qid 0 00:23:54.017 [2024-07-25 01:24:16.393051] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.017 [2024-07-25 01:24:16.393059] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.017 [2024-07-25 01:24:16.393062] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.017 [2024-07-25 01:24:16.393065] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23ffe40) on tqpair=0x237cec0 00:23:54.017 [2024-07-25 01:24:16.393076] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:54.017 [2024-07-25 01:24:16.393082] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:23:54.017 [2024-07-25 01:24:16.393086] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:23:54.017 [2024-07-25 01:24:16.393096] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.017 [2024-07-25 
01:24:16.393103] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.017 [2024-07-25 01:24:16.393106] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x237cec0) 00:23:54.017 [2024-07-25 01:24:16.393113] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.017 [2024-07-25 01:24:16.393126] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23ffe40, cid 0, qid 0 00:23:54.017 [2024-07-25 01:24:16.393384] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.017 [2024-07-25 01:24:16.393398] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.017 [2024-07-25 01:24:16.393402] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.017 [2024-07-25 01:24:16.393406] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23ffe40) on tqpair=0x237cec0 00:23:54.017 [2024-07-25 01:24:16.393411] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:23:54.017 [2024-07-25 01:24:16.393419] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:23:54.017 [2024-07-25 01:24:16.393427] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.017 [2024-07-25 01:24:16.393431] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.018 [2024-07-25 01:24:16.393434] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x237cec0) 00:23:54.018 [2024-07-25 01:24:16.393442] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.018 [2024-07-25 01:24:16.393456] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23ffe40, cid 0, qid 0 00:23:54.018 [2024-07-25 
01:24:16.393602] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.018 [2024-07-25 01:24:16.393611] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.018 [2024-07-25 01:24:16.393614] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.018 [2024-07-25 01:24:16.393618] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23ffe40) on tqpair=0x237cec0 00:23:54.018 [2024-07-25 01:24:16.393623] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:23:54.018 [2024-07-25 01:24:16.393631] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:23:54.018 [2024-07-25 01:24:16.393639] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.018 [2024-07-25 01:24:16.393642] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.018 [2024-07-25 01:24:16.393645] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x237cec0) 00:23:54.018 [2024-07-25 01:24:16.393652] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.018 [2024-07-25 01:24:16.393665] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23ffe40, cid 0, qid 0 00:23:54.018 [2024-07-25 01:24:16.393811] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.018 [2024-07-25 01:24:16.393820] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.018 [2024-07-25 01:24:16.393823] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.018 [2024-07-25 01:24:16.393827] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23ffe40) on tqpair=0x237cec0 00:23:54.018 [2024-07-25 01:24:16.393832] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:54.018 [2024-07-25 01:24:16.393842] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.018 [2024-07-25 01:24:16.393846] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.018 [2024-07-25 01:24:16.393849] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x237cec0) 00:23:54.018 [2024-07-25 01:24:16.393857] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.018 [2024-07-25 01:24:16.393873] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23ffe40, cid 0, qid 0 00:23:54.018 [2024-07-25 01:24:16.394015] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.018 [2024-07-25 01:24:16.394024] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.018 [2024-07-25 01:24:16.394027] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.018 [2024-07-25 01:24:16.394030] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23ffe40) on tqpair=0x237cec0 00:23:54.018 [2024-07-25 01:24:16.394034] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:23:54.018 [2024-07-25 01:24:16.394039] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:23:54.018 [2024-07-25 01:24:16.394053] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:54.018 [2024-07-25 01:24:16.394158] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:23:54.018 [2024-07-25 01:24:16.394161] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:54.018 [2024-07-25 01:24:16.394169] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.018 [2024-07-25 01:24:16.394172] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.018 [2024-07-25 01:24:16.394176] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x237cec0) 00:23:54.018 [2024-07-25 01:24:16.394183] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.018 [2024-07-25 01:24:16.394196] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23ffe40, cid 0, qid 0 00:23:54.018 [2024-07-25 01:24:16.394346] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.018 [2024-07-25 01:24:16.394355] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.018 [2024-07-25 01:24:16.394358] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.018 [2024-07-25 01:24:16.394361] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23ffe40) on tqpair=0x237cec0 00:23:54.018 [2024-07-25 01:24:16.394366] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:54.018 [2024-07-25 01:24:16.394377] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.018 [2024-07-25 01:24:16.394380] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.018 [2024-07-25 01:24:16.394383] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x237cec0) 00:23:54.018 [2024-07-25 01:24:16.394390] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.018 [2024-07-25 01:24:16.394403] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23ffe40, cid 0, qid 0 00:23:54.018 [2024-07-25 01:24:16.394551] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.018 [2024-07-25 01:24:16.394561] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.018 [2024-07-25 01:24:16.394564] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.018 [2024-07-25 01:24:16.394567] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23ffe40) on tqpair=0x237cec0 00:23:54.018 [2024-07-25 01:24:16.394571] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:54.018 [2024-07-25 01:24:16.394576] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:23:54.018 [2024-07-25 01:24:16.394584] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:23:54.018 [2024-07-25 01:24:16.394592] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:23:54.018 [2024-07-25 01:24:16.394603] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.018 [2024-07-25 01:24:16.394607] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x237cec0) 00:23:54.018 [2024-07-25 01:24:16.394614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.018 [2024-07-25 01:24:16.394627] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23ffe40, cid 0, qid 0 00:23:54.018 [2024-07-25 01:24:16.394803] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:54.018 [2024-07-25 01:24:16.394813] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:54.018 [2024-07-25 01:24:16.394816] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:54.018 [2024-07-25 01:24:16.394820] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x237cec0): datao=0, datal=4096, cccid=0 00:23:54.018 [2024-07-25 01:24:16.394824] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23ffe40) on tqpair(0x237cec0): expected_datao=0, payload_size=4096 00:23:54.018 [2024-07-25 01:24:16.394827] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.018 [2024-07-25 01:24:16.395075] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:54.018 [2024-07-25 01:24:16.395080] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:54.018 [2024-07-25 01:24:16.438049] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.018 [2024-07-25 01:24:16.438059] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.018 [2024-07-25 01:24:16.438062] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.018 [2024-07-25 01:24:16.438066] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23ffe40) on tqpair=0x237cec0 00:23:54.018 [2024-07-25 01:24:16.438073] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:23:54.018 [2024-07-25 01:24:16.438080] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:23:54.018 [2024-07-25 01:24:16.438084] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:23:54.018 [2024-07-25 01:24:16.438088] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:23:54.018 [2024-07-25 01:24:16.438092] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
fuses compare and write: 1 00:23:54.018 [2024-07-25 01:24:16.438096] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:23:54.018 [2024-07-25 01:24:16.438104] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:23:54.018 [2024-07-25 01:24:16.438111] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.018 [2024-07-25 01:24:16.438114] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.018 [2024-07-25 01:24:16.438117] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x237cec0) 00:23:54.018 [2024-07-25 01:24:16.438125] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:54.018 [2024-07-25 01:24:16.438139] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23ffe40, cid 0, qid 0 00:23:54.018 [2024-07-25 01:24:16.438370] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.018 [2024-07-25 01:24:16.438380] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.018 [2024-07-25 01:24:16.438383] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.018 [2024-07-25 01:24:16.438386] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23ffe40) on tqpair=0x237cec0 00:23:54.018 [2024-07-25 01:24:16.438393] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.018 [2024-07-25 01:24:16.438397] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.018 [2024-07-25 01:24:16.438403] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x237cec0) 00:23:54.018 [2024-07-25 01:24:16.438410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.018 [2024-07-25 01:24:16.438415] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.018 [2024-07-25 01:24:16.438418] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.018 [2024-07-25 01:24:16.438421] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x237cec0) 00:23:54.018 [2024-07-25 01:24:16.438426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.018 [2024-07-25 01:24:16.438431] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.018 [2024-07-25 01:24:16.438434] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.018 [2024-07-25 01:24:16.438437] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x237cec0) 00:23:54.019 [2024-07-25 01:24:16.438442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.019 [2024-07-25 01:24:16.438447] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.019 [2024-07-25 01:24:16.438450] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.019 [2024-07-25 01:24:16.438453] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x237cec0) 00:23:54.019 [2024-07-25 01:24:16.438458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.019 [2024-07-25 01:24:16.438462] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:54.019 [2024-07-25 01:24:16.438473] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:54.019 
[2024-07-25 01:24:16.438480] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.019 [2024-07-25 01:24:16.438483] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x237cec0) 00:23:54.019 [2024-07-25 01:24:16.438489] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.019 [2024-07-25 01:24:16.438503] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23ffe40, cid 0, qid 0 00:23:54.019 [2024-07-25 01:24:16.438507] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23fffc0, cid 1, qid 0 00:23:54.019 [2024-07-25 01:24:16.438511] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2400140, cid 2, qid 0 00:23:54.019 [2024-07-25 01:24:16.438515] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24002c0, cid 3, qid 0 00:23:54.019 [2024-07-25 01:24:16.438519] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2400440, cid 4, qid 0 00:23:54.019 [2024-07-25 01:24:16.438702] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.019 [2024-07-25 01:24:16.438712] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.019 [2024-07-25 01:24:16.438715] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.019 [2024-07-25 01:24:16.438718] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2400440) on tqpair=0x237cec0 00:23:54.019 [2024-07-25 01:24:16.438723] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:23:54.019 [2024-07-25 01:24:16.438728] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:23:54.019 [2024-07-25 01:24:16.438736] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:23:54.019 [2024-07-25 01:24:16.438742] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:23:54.019 [2024-07-25 01:24:16.438751] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.019 [2024-07-25 01:24:16.438754] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.019 [2024-07-25 01:24:16.438758] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x237cec0) 00:23:54.019 [2024-07-25 01:24:16.438764] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:54.019 [2024-07-25 01:24:16.438777] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2400440, cid 4, qid 0 00:23:54.019 [2024-07-25 01:24:16.438930] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.019 [2024-07-25 01:24:16.438940] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.019 [2024-07-25 01:24:16.438943] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.019 [2024-07-25 01:24:16.438946] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2400440) on tqpair=0x237cec0 00:23:54.019 [2024-07-25 01:24:16.438999] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:23:54.019 [2024-07-25 01:24:16.439009] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:23:54.019 [2024-07-25 01:24:16.439017] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.019 [2024-07-25 01:24:16.439020] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x237cec0) 00:23:54.019 [2024-07-25 01:24:16.439026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.019 [2024-07-25 01:24:16.439039] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2400440, cid 4, qid 0 00:23:54.019 [2024-07-25 01:24:16.439205] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:54.019 [2024-07-25 01:24:16.439215] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:54.019 [2024-07-25 01:24:16.439218] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:54.019 [2024-07-25 01:24:16.439222] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x237cec0): datao=0, datal=4096, cccid=4 00:23:54.019 [2024-07-25 01:24:16.439226] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2400440) on tqpair(0x237cec0): expected_datao=0, payload_size=4096 00:23:54.019 [2024-07-25 01:24:16.439229] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.019 [2024-07-25 01:24:16.439236] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:54.019 [2024-07-25 01:24:16.439239] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:54.019 [2024-07-25 01:24:16.439519] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.019 [2024-07-25 01:24:16.439525] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.019 [2024-07-25 01:24:16.439528] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.019 [2024-07-25 01:24:16.439531] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2400440) on tqpair=0x237cec0 00:23:54.019 [2024-07-25 01:24:16.439544] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:23:54.019 [2024-07-25 01:24:16.439552] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:23:54.019 [2024-07-25 01:24:16.439561] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:23:54.019 [2024-07-25 01:24:16.439568] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.019 [2024-07-25 01:24:16.439571] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x237cec0) 00:23:54.019 [2024-07-25 01:24:16.439577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.019 [2024-07-25 01:24:16.439592] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2400440, cid 4, qid 0 00:23:54.019 [2024-07-25 01:24:16.439756] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:54.019 [2024-07-25 01:24:16.439767] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:54.019 [2024-07-25 01:24:16.439770] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:54.019 [2024-07-25 01:24:16.439773] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x237cec0): datao=0, datal=4096, cccid=4 00:23:54.019 [2024-07-25 01:24:16.439777] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2400440) on tqpair(0x237cec0): expected_datao=0, payload_size=4096 00:23:54.019 [2024-07-25 01:24:16.439780] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.019 [2024-07-25 01:24:16.439787] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:54.019 [2024-07-25 01:24:16.439790] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:54.019 [2024-07-25 01:24:16.440063] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.019 [2024-07-25 01:24:16.440069] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.019 [2024-07-25 01:24:16.440071] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.019 [2024-07-25 01:24:16.440075] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2400440) on tqpair=0x237cec0 00:23:54.019 [2024-07-25 01:24:16.440087] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:23:54.019 [2024-07-25 01:24:16.440097] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:23:54.019 [2024-07-25 01:24:16.440104] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.019 [2024-07-25 01:24:16.440108] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x237cec0) 00:23:54.019 [2024-07-25 01:24:16.440114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.019 [2024-07-25 01:24:16.440127] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2400440, cid 4, qid 0 00:23:54.019 [2024-07-25 01:24:16.440286] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:54.019 [2024-07-25 01:24:16.440296] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:54.020 [2024-07-25 01:24:16.440299] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:54.020 [2024-07-25 01:24:16.440302] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x237cec0): datao=0, datal=4096, cccid=4 00:23:54.020 [2024-07-25 01:24:16.440306] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2400440) on tqpair(0x237cec0): expected_datao=0, payload_size=4096 00:23:54.020 [2024-07-25 01:24:16.440309] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.020 [2024-07-25 01:24:16.440316] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:54.020 [2024-07-25 01:24:16.440319] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:54.020 [2024-07-25 01:24:16.440591] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.020 [2024-07-25 01:24:16.440596] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.020 [2024-07-25 01:24:16.440599] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.020 [2024-07-25 01:24:16.440602] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2400440) on tqpair=0x237cec0 00:23:54.020 [2024-07-25 01:24:16.440609] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:23:54.020 [2024-07-25 01:24:16.440617] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:23:54.020 [2024-07-25 01:24:16.440627] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:23:54.020 [2024-07-25 01:24:16.440635] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:23:54.020 [2024-07-25 01:24:16.440640] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:23:54.020 [2024-07-25 01:24:16.440644] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:23:54.020 [2024-07-25 01:24:16.440648] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:23:54.020 
[2024-07-25 01:24:16.440652] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:23:54.020 [2024-07-25 01:24:16.440656] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:23:54.020 [2024-07-25 01:24:16.440671] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.020 [2024-07-25 01:24:16.440674] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x237cec0) 00:23:54.020 [2024-07-25 01:24:16.440681] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.020 [2024-07-25 01:24:16.440687] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.020 [2024-07-25 01:24:16.440690] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.020 [2024-07-25 01:24:16.440693] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x237cec0) 00:23:54.020 [2024-07-25 01:24:16.440698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.020 [2024-07-25 01:24:16.440712] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2400440, cid 4, qid 0 00:23:54.020 [2024-07-25 01:24:16.440717] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24005c0, cid 5, qid 0 00:23:54.020 [2024-07-25 01:24:16.440883] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.020 [2024-07-25 01:24:16.440893] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.020 [2024-07-25 01:24:16.440896] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.020 [2024-07-25 01:24:16.440899] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2400440) on tqpair=0x237cec0 
00:23:54.020 [2024-07-25 01:24:16.440905] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.020 [2024-07-25 01:24:16.440910] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.020 [2024-07-25 01:24:16.440913] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.020 [2024-07-25 01:24:16.440916] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24005c0) on tqpair=0x237cec0 00:23:54.020 [2024-07-25 01:24:16.440927] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.020 [2024-07-25 01:24:16.440931] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x237cec0) 00:23:54.020 [2024-07-25 01:24:16.440937] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.020 [2024-07-25 01:24:16.440949] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24005c0, cid 5, qid 0 00:23:54.020 [2024-07-25 01:24:16.441287] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.020 [2024-07-25 01:24:16.441293] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.020 [2024-07-25 01:24:16.441296] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.020 [2024-07-25 01:24:16.441299] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24005c0) on tqpair=0x237cec0 00:23:54.020 [2024-07-25 01:24:16.441307] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.020 [2024-07-25 01:24:16.441311] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x237cec0) 00:23:54.020 [2024-07-25 01:24:16.441316] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.020 [2024-07-25 01:24:16.441329] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24005c0, cid 5, qid 0 00:23:54.020 [2024-07-25 01:24:16.441476] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.020 [2024-07-25 01:24:16.441485] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.020 [2024-07-25 01:24:16.441488] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.020 [2024-07-25 01:24:16.441492] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24005c0) on tqpair=0x237cec0 00:23:54.020 [2024-07-25 01:24:16.441502] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.020 [2024-07-25 01:24:16.441506] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x237cec0) 00:23:54.020 [2024-07-25 01:24:16.441512] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.020 [2024-07-25 01:24:16.441525] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24005c0, cid 5, qid 0 00:23:54.020 [2024-07-25 01:24:16.441682] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.020 [2024-07-25 01:24:16.441691] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.020 [2024-07-25 01:24:16.441694] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.020 [2024-07-25 01:24:16.441698] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24005c0) on tqpair=0x237cec0 00:23:54.020 [2024-07-25 01:24:16.441714] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.020 [2024-07-25 01:24:16.441718] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x237cec0) 00:23:54.020 [2024-07-25 01:24:16.441725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.020 [2024-07-25 01:24:16.441731] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.020 [2024-07-25 01:24:16.441734] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x237cec0) 00:23:54.020 [2024-07-25 01:24:16.441739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.020 [2024-07-25 01:24:16.441745] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.020 [2024-07-25 01:24:16.441749] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x237cec0) 00:23:54.020 [2024-07-25 01:24:16.441754] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.020 [2024-07-25 01:24:16.441760] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.020 [2024-07-25 01:24:16.441763] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x237cec0) 00:23:54.020 [2024-07-25 01:24:16.441769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.020 [2024-07-25 01:24:16.441782] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24005c0, cid 5, qid 0 00:23:54.020 [2024-07-25 01:24:16.441786] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2400440, cid 4, qid 0 00:23:54.020 [2024-07-25 01:24:16.441791] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2400740, cid 6, qid 0 00:23:54.020 [2024-07-25 01:24:16.441795] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24008c0, cid 7, qid 0 00:23:54.020 [2024-07-25 01:24:16.442206] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:54.020 [2024-07-25 01:24:16.442217] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:54.020 [2024-07-25 01:24:16.442220] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:54.020 [2024-07-25 01:24:16.442224] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x237cec0): datao=0, datal=8192, cccid=5 00:23:54.020 [2024-07-25 01:24:16.442230] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24005c0) on tqpair(0x237cec0): expected_datao=0, payload_size=8192 00:23:54.020 [2024-07-25 01:24:16.442234] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.020 [2024-07-25 01:24:16.442240] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:54.020 [2024-07-25 01:24:16.442244] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:54.020 [2024-07-25 01:24:16.442248] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:54.020 [2024-07-25 01:24:16.442253] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:54.020 [2024-07-25 01:24:16.442256] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:54.020 [2024-07-25 01:24:16.442259] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x237cec0): datao=0, datal=512, cccid=4 00:23:54.020 [2024-07-25 01:24:16.442263] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2400440) on tqpair(0x237cec0): expected_datao=0, payload_size=512 00:23:54.020 [2024-07-25 01:24:16.442266] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.020 [2024-07-25 01:24:16.442272] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:54.020 [2024-07-25 01:24:16.442274] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:54.020 [2024-07-25 01:24:16.442279] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 
00:23:54.020 [2024-07-25 01:24:16.442284] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:54.020 [2024-07-25 01:24:16.442287] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:54.020 [2024-07-25 01:24:16.442290] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x237cec0): datao=0, datal=512, cccid=6 00:23:54.020 [2024-07-25 01:24:16.442293] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2400740) on tqpair(0x237cec0): expected_datao=0, payload_size=512 00:23:54.020 [2024-07-25 01:24:16.442297] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.020 [2024-07-25 01:24:16.442302] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:54.020 [2024-07-25 01:24:16.442305] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:54.021 [2024-07-25 01:24:16.442310] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:54.021 [2024-07-25 01:24:16.442315] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:54.021 [2024-07-25 01:24:16.442318] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:54.021 [2024-07-25 01:24:16.442321] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x237cec0): datao=0, datal=4096, cccid=7 00:23:54.021 [2024-07-25 01:24:16.442324] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24008c0) on tqpair(0x237cec0): expected_datao=0, payload_size=4096 00:23:54.021 [2024-07-25 01:24:16.442328] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.021 [2024-07-25 01:24:16.442333] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:54.021 [2024-07-25 01:24:16.442336] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:54.021 [2024-07-25 01:24:16.442514] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.021 [2024-07-25 01:24:16.442519] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.021 [2024-07-25 01:24:16.442522] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.021 [2024-07-25 01:24:16.442525] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24005c0) on tqpair=0x237cec0 00:23:54.021 [2024-07-25 01:24:16.442536] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.021 [2024-07-25 01:24:16.442541] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.021 [2024-07-25 01:24:16.442544] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.021 [2024-07-25 01:24:16.442548] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2400440) on tqpair=0x237cec0 00:23:54.021 [2024-07-25 01:24:16.442556] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.021 [2024-07-25 01:24:16.442561] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.021 [2024-07-25 01:24:16.442565] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.021 [2024-07-25 01:24:16.442568] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2400740) on tqpair=0x237cec0 00:23:54.021 [2024-07-25 01:24:16.442574] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.021 [2024-07-25 01:24:16.442579] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.021 [2024-07-25 01:24:16.442582] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.021 [2024-07-25 01:24:16.442585] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24008c0) on tqpair=0x237cec0 00:23:54.021 ===================================================== 00:23:54.021 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:54.021 ===================================================== 00:23:54.021 Controller Capabilities/Features 00:23:54.021 
================================ 00:23:54.021 Vendor ID: 8086 00:23:54.021 Subsystem Vendor ID: 8086 00:23:54.021 Serial Number: SPDK00000000000001 00:23:54.021 Model Number: SPDK bdev Controller 00:23:54.021 Firmware Version: 24.09 00:23:54.021 Recommended Arb Burst: 6 00:23:54.021 IEEE OUI Identifier: e4 d2 5c 00:23:54.021 Multi-path I/O 00:23:54.021 May have multiple subsystem ports: Yes 00:23:54.021 May have multiple controllers: Yes 00:23:54.021 Associated with SR-IOV VF: No 00:23:54.021 Max Data Transfer Size: 131072 00:23:54.021 Max Number of Namespaces: 32 00:23:54.021 Max Number of I/O Queues: 127 00:23:54.021 NVMe Specification Version (VS): 1.3 00:23:54.021 NVMe Specification Version (Identify): 1.3 00:23:54.021 Maximum Queue Entries: 128 00:23:54.021 Contiguous Queues Required: Yes 00:23:54.021 Arbitration Mechanisms Supported 00:23:54.021 Weighted Round Robin: Not Supported 00:23:54.021 Vendor Specific: Not Supported 00:23:54.021 Reset Timeout: 15000 ms 00:23:54.021 Doorbell Stride: 4 bytes 00:23:54.021 NVM Subsystem Reset: Not Supported 00:23:54.021 Command Sets Supported 00:23:54.021 NVM Command Set: Supported 00:23:54.021 Boot Partition: Not Supported 00:23:54.021 Memory Page Size Minimum: 4096 bytes 00:23:54.021 Memory Page Size Maximum: 4096 bytes 00:23:54.021 Persistent Memory Region: Not Supported 00:23:54.021 Optional Asynchronous Events Supported 00:23:54.021 Namespace Attribute Notices: Supported 00:23:54.021 Firmware Activation Notices: Not Supported 00:23:54.021 ANA Change Notices: Not Supported 00:23:54.021 PLE Aggregate Log Change Notices: Not Supported 00:23:54.021 LBA Status Info Alert Notices: Not Supported 00:23:54.021 EGE Aggregate Log Change Notices: Not Supported 00:23:54.021 Normal NVM Subsystem Shutdown event: Not Supported 00:23:54.021 Zone Descriptor Change Notices: Not Supported 00:23:54.021 Discovery Log Change Notices: Not Supported 00:23:54.021 Controller Attributes 00:23:54.021 128-bit Host Identifier: Supported 
00:23:54.021 Non-Operational Permissive Mode: Not Supported 00:23:54.021 NVM Sets: Not Supported 00:23:54.021 Read Recovery Levels: Not Supported 00:23:54.021 Endurance Groups: Not Supported 00:23:54.021 Predictable Latency Mode: Not Supported 00:23:54.021 Traffic Based Keep Alive: Not Supported 00:23:54.021 Namespace Granularity: Not Supported 00:23:54.021 SQ Associations: Not Supported 00:23:54.021 UUID List: Not Supported 00:23:54.021 Multi-Domain Subsystem: Not Supported 00:23:54.021 Fixed Capacity Management: Not Supported 00:23:54.021 Variable Capacity Management: Not Supported 00:23:54.021 Delete Endurance Group: Not Supported 00:23:54.021 Delete NVM Set: Not Supported 00:23:54.021 Extended LBA Formats Supported: Not Supported 00:23:54.021 Flexible Data Placement Supported: Not Supported 00:23:54.021 00:23:54.021 Controller Memory Buffer Support 00:23:54.021 ================================ 00:23:54.021 Supported: No 00:23:54.021 00:23:54.021 Persistent Memory Region Support 00:23:54.021 ================================ 00:23:54.021 Supported: No 00:23:54.021 00:23:54.021 Admin Command Set Attributes 00:23:54.021 ============================ 00:23:54.021 Security Send/Receive: Not Supported 00:23:54.021 Format NVM: Not Supported 00:23:54.021 Firmware Activate/Download: Not Supported 00:23:54.021 Namespace Management: Not Supported 00:23:54.021 Device Self-Test: Not Supported 00:23:54.021 Directives: Not Supported 00:23:54.021 NVMe-MI: Not Supported 00:23:54.021 Virtualization Management: Not Supported 00:23:54.021 Doorbell Buffer Config: Not Supported 00:23:54.021 Get LBA Status Capability: Not Supported 00:23:54.021 Command & Feature Lockdown Capability: Not Supported 00:23:54.021 Abort Command Limit: 4 00:23:54.021 Async Event Request Limit: 4 00:23:54.021 Number of Firmware Slots: N/A 00:23:54.021 Firmware Slot 1 Read-Only: N/A 00:23:54.021 Firmware Activation Without Reset: N/A 00:23:54.021 Multiple Update Detection Support: N/A 00:23:54.021 Firmware 
Update Granularity: No Information Provided 00:23:54.021 Per-Namespace SMART Log: No 00:23:54.021 Asymmetric Namespace Access Log Page: Not Supported 00:23:54.021 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:23:54.021 Command Effects Log Page: Supported 00:23:54.021 Get Log Page Extended Data: Supported 00:23:54.021 Telemetry Log Pages: Not Supported 00:23:54.021 Persistent Event Log Pages: Not Supported 00:23:54.021 Supported Log Pages Log Page: May Support 00:23:54.021 Commands Supported & Effects Log Page: Not Supported 00:23:54.021 Feature Identifiers & Effects Log Page: May Support 00:23:54.021 NVMe-MI Commands & Effects Log Page: May Support 00:23:54.021 Data Area 4 for Telemetry Log: Not Supported 00:23:54.021 Error Log Page Entries Supported: 128 00:23:54.021 Keep Alive: Supported 00:23:54.021 Keep Alive Granularity: 10000 ms 00:23:54.021 00:23:54.021 NVM Command Set Attributes 00:23:54.021 ========================== 00:23:54.021 Submission Queue Entry Size 00:23:54.021 Max: 64 00:23:54.021 Min: 64 00:23:54.021 Completion Queue Entry Size 00:23:54.021 Max: 16 00:23:54.021 Min: 16 00:23:54.021 Number of Namespaces: 32 00:23:54.021 Compare Command: Supported 00:23:54.021 Write Uncorrectable Command: Not Supported 00:23:54.021 Dataset Management Command: Supported 00:23:54.021 Write Zeroes Command: Supported 00:23:54.021 Set Features Save Field: Not Supported 00:23:54.021 Reservations: Supported 00:23:54.021 Timestamp: Not Supported 00:23:54.021 Copy: Supported 00:23:54.021 Volatile Write Cache: Present 00:23:54.021 Atomic Write Unit (Normal): 1 00:23:54.021 Atomic Write Unit (PFail): 1 00:23:54.021 Atomic Compare & Write Unit: 1 00:23:54.021 Fused Compare & Write: Supported 00:23:54.021 Scatter-Gather List 00:23:54.021 SGL Command Set: Supported 00:23:54.021 SGL Keyed: Supported 00:23:54.021 SGL Bit Bucket Descriptor: Not Supported 00:23:54.021 SGL Metadata Pointer: Not Supported 00:23:54.021 Oversized SGL: Not Supported 00:23:54.021 SGL Metadata Address: Not 
Supported 00:23:54.021 SGL Offset: Supported 00:23:54.021 Transport SGL Data Block: Not Supported 00:23:54.021 Replay Protected Memory Block: Not Supported 00:23:54.021 00:23:54.021 Firmware Slot Information 00:23:54.021 ========================= 00:23:54.021 Active slot: 1 00:23:54.021 Slot 1 Firmware Revision: 24.09 00:23:54.021 00:23:54.021 00:23:54.021 Commands Supported and Effects 00:23:54.021 ============================== 00:23:54.021 Admin Commands 00:23:54.021 -------------- 00:23:54.021 Get Log Page (02h): Supported 00:23:54.021 Identify (06h): Supported 00:23:54.022 Abort (08h): Supported 00:23:54.022 Set Features (09h): Supported 00:23:54.022 Get Features (0Ah): Supported 00:23:54.022 Asynchronous Event Request (0Ch): Supported 00:23:54.022 Keep Alive (18h): Supported 00:23:54.022 I/O Commands 00:23:54.022 ------------ 00:23:54.022 Flush (00h): Supported LBA-Change 00:23:54.022 Write (01h): Supported LBA-Change 00:23:54.022 Read (02h): Supported 00:23:54.022 Compare (05h): Supported 00:23:54.022 Write Zeroes (08h): Supported LBA-Change 00:23:54.022 Dataset Management (09h): Supported LBA-Change 00:23:54.022 Copy (19h): Supported LBA-Change 00:23:54.022 00:23:54.022 Error Log 00:23:54.022 ========= 00:23:54.022 00:23:54.022 Arbitration 00:23:54.022 =========== 00:23:54.022 Arbitration Burst: 1 00:23:54.022 00:23:54.022 Power Management 00:23:54.022 ================ 00:23:54.022 Number of Power States: 1 00:23:54.022 Current Power State: Power State #0 00:23:54.022 Power State #0: 00:23:54.022 Max Power: 0.00 W 00:23:54.022 Non-Operational State: Operational 00:23:54.022 Entry Latency: Not Reported 00:23:54.022 Exit Latency: Not Reported 00:23:54.022 Relative Read Throughput: 0 00:23:54.022 Relative Read Latency: 0 00:23:54.022 Relative Write Throughput: 0 00:23:54.022 Relative Write Latency: 0 00:23:54.022 Idle Power: Not Reported 00:23:54.022 Active Power: Not Reported 00:23:54.022 Non-Operational Permissive Mode: Not Supported 00:23:54.022 
00:23:54.022 Health Information 00:23:54.022 ================== 00:23:54.022 Critical Warnings: 00:23:54.022 Available Spare Space: OK 00:23:54.022 Temperature: OK 00:23:54.022 Device Reliability: OK 00:23:54.022 Read Only: No 00:23:54.022 Volatile Memory Backup: OK 00:23:54.022 Current Temperature: 0 Kelvin (-273 Celsius) 00:23:54.022 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:23:54.022 Available Spare: 0% 00:23:54.022 Available Spare Threshold: 0% 00:23:54.022 Life Percentage Used:[2024-07-25 01:24:16.442669] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.022 [2024-07-25 01:24:16.442674] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x237cec0) 00:23:54.022 [2024-07-25 01:24:16.442680] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.022 [2024-07-25 01:24:16.442693] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24008c0, cid 7, qid 0 00:23:54.022 [2024-07-25 01:24:16.442860] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.022 [2024-07-25 01:24:16.442870] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.022 [2024-07-25 01:24:16.442873] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.022 [2024-07-25 01:24:16.442876] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24008c0) on tqpair=0x237cec0 00:23:54.022 [2024-07-25 01:24:16.442909] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:23:54.022 [2024-07-25 01:24:16.442919] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23ffe40) on tqpair=0x237cec0 00:23:54.022 [2024-07-25 01:24:16.442924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.022 [2024-07-25 
01:24:16.442929] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23fffc0) on tqpair=0x237cec0 00:23:54.022 [2024-07-25 01:24:16.442933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.022 [2024-07-25 01:24:16.442937] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2400140) on tqpair=0x237cec0 00:23:54.022 [2024-07-25 01:24:16.442941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.022 [2024-07-25 01:24:16.442945] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24002c0) on tqpair=0x237cec0 00:23:54.022 [2024-07-25 01:24:16.442949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.022 [2024-07-25 01:24:16.442956] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.022 [2024-07-25 01:24:16.442959] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.022 [2024-07-25 01:24:16.442962] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x237cec0) 00:23:54.022 [2024-07-25 01:24:16.442968] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.022 [2024-07-25 01:24:16.442983] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24002c0, cid 3, qid 0 00:23:54.022 [2024-07-25 01:24:16.443134] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.022 [2024-07-25 01:24:16.443144] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.022 [2024-07-25 01:24:16.443147] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.022 [2024-07-25 01:24:16.443150] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24002c0) on 
tqpair=0x237cec0 00:23:54.022 [2024-07-25 01:24:16.443157] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.022 [2024-07-25 01:24:16.443160] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.022 [2024-07-25 01:24:16.443164] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x237cec0) 00:23:54.022 [2024-07-25 01:24:16.443173] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.022 [2024-07-25 01:24:16.443190] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24002c0, cid 3, qid 0 00:23:54.022 [2024-07-25 01:24:16.443345] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.022 [2024-07-25 01:24:16.443354] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.022 [2024-07-25 01:24:16.443357] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.022 [2024-07-25 01:24:16.443361] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24002c0) on tqpair=0x237cec0 00:23:54.022 [2024-07-25 01:24:16.443365] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:23:54.022 [2024-07-25 01:24:16.443368] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:23:54.022 [2024-07-25 01:24:16.443379] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.022 [2024-07-25 01:24:16.443383] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.022 [2024-07-25 01:24:16.443386] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x237cec0) 00:23:54.022 [2024-07-25 01:24:16.443392] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.022 [2024-07-25 01:24:16.443404] 
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24002c0, cid 3, qid 0 00:23:54.022 [2024-07-25 01:24:16.443549] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.022 [2024-07-25 01:24:16.443558] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.022 [2024-07-25 01:24:16.443561] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.022 [2024-07-25 01:24:16.443565] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24002c0) on tqpair=0x237cec0 00:23:54.022 [2024-07-25 01:24:16.443576] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.022 [2024-07-25 01:24:16.443580] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.022 [2024-07-25 01:24:16.443583] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x237cec0) 00:23:54.022 [2024-07-25 01:24:16.443589] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.022 [2024-07-25 01:24:16.443601] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24002c0, cid 3, qid 0 00:23:54.022 [2024-07-25 01:24:16.443754] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.022 [2024-07-25 01:24:16.443763] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.022 [2024-07-25 01:24:16.443766] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.022 [2024-07-25 01:24:16.443769] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24002c0) on tqpair=0x237cec0 00:23:54.022 [2024-07-25 01:24:16.443780] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.022 [2024-07-25 01:24:16.443783] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.022 [2024-07-25 01:24:16.443786] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0x237cec0) 00:23:54.022 [2024-07-25 01:24:16.443793] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.022 [2024-07-25 01:24:16.443805] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24002c0, cid 3, qid 0 00:23:54.022 [2024-07-25 01:24:16.443951] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.022 [2024-07-25 01:24:16.443960] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.022 [2024-07-25 01:24:16.443963] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.022 [2024-07-25 01:24:16.443966] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24002c0) on tqpair=0x237cec0 00:23:54.022 [2024-07-25 01:24:16.443977] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.022 [2024-07-25 01:24:16.443981] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.022 [2024-07-25 01:24:16.443986] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x237cec0) 00:23:54.022 [2024-07-25 01:24:16.443993] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.022 [2024-07-25 01:24:16.444005] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24002c0, cid 3, qid 0 00:23:54.022 [2024-07-25 01:24:16.448052] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.022 [2024-07-25 01:24:16.448059] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.022 [2024-07-25 01:24:16.448062] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.022 [2024-07-25 01:24:16.448065] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24002c0) on tqpair=0x237cec0 00:23:54.022 [2024-07-25 01:24:16.448074] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:23:54.022 [2024-07-25 01:24:16.448078] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.022 [2024-07-25 01:24:16.448081] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x237cec0) 00:23:54.022 [2024-07-25 01:24:16.448087] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.022 [2024-07-25 01:24:16.448098] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24002c0, cid 3, qid 0 00:23:54.023 [2024-07-25 01:24:16.448333] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.023 [2024-07-25 01:24:16.448343] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.023 [2024-07-25 01:24:16.448346] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.023 [2024-07-25 01:24:16.448349] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24002c0) on tqpair=0x237cec0 00:23:54.023 [2024-07-25 01:24:16.448357] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:23:54.023 0% 00:23:54.023 Data Units Read: 0 00:23:54.023 Data Units Written: 0 00:23:54.023 Host Read Commands: 0 00:23:54.023 Host Write Commands: 0 00:23:54.023 Controller Busy Time: 0 minutes 00:23:54.023 Power Cycles: 0 00:23:54.023 Power On Hours: 0 hours 00:23:54.023 Unsafe Shutdowns: 0 00:23:54.023 Unrecoverable Media Errors: 0 00:23:54.023 Lifetime Error Log Entries: 0 00:23:54.023 Warning Temperature Time: 0 minutes 00:23:54.023 Critical Temperature Time: 0 minutes 00:23:54.023 00:23:54.023 Number of Queues 00:23:54.023 ================ 00:23:54.023 Number of I/O Submission Queues: 127 00:23:54.023 Number of I/O Completion Queues: 127 00:23:54.023 00:23:54.023 Active Namespaces 00:23:54.023 ================= 00:23:54.023 Namespace ID:1 00:23:54.023 Error Recovery Timeout: Unlimited 
00:23:54.023 Command Set Identifier: NVM (00h) 00:23:54.023 Deallocate: Supported 00:23:54.023 Deallocated/Unwritten Error: Not Supported 00:23:54.023 Deallocated Read Value: Unknown 00:23:54.023 Deallocate in Write Zeroes: Not Supported 00:23:54.023 Deallocated Guard Field: 0xFFFF 00:23:54.023 Flush: Supported 00:23:54.023 Reservation: Supported 00:23:54.023 Namespace Sharing Capabilities: Multiple Controllers 00:23:54.023 Size (in LBAs): 131072 (0GiB) 00:23:54.023 Capacity (in LBAs): 131072 (0GiB) 00:23:54.023 Utilization (in LBAs): 131072 (0GiB) 00:23:54.023 NGUID: ABCDEF0123456789ABCDEF0123456789 00:23:54.023 EUI64: ABCDEF0123456789 00:23:54.023 UUID: 262b3f84-16de-4fb7-b239-95a8e7ae3489 00:23:54.023 Thin Provisioning: Not Supported 00:23:54.023 Per-NS Atomic Units: Yes 00:23:54.023 Atomic Boundary Size (Normal): 0 00:23:54.023 Atomic Boundary Size (PFail): 0 00:23:54.023 Atomic Boundary Offset: 0 00:23:54.023 Maximum Single Source Range Length: 65535 00:23:54.023 Maximum Copy Length: 65535 00:23:54.023 Maximum Source Range Count: 1 00:23:54.023 NGUID/EUI64 Never Reused: No 00:23:54.023 Namespace Write Protected: No 00:23:54.023 Number of LBA Formats: 1 00:23:54.023 Current LBA Format: LBA Format #00 00:23:54.023 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:54.023 00:23:54.023 01:24:16 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:23:54.023 01:24:16 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:54.023 01:24:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.023 01:24:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:54.023 01:24:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.023 01:24:16 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:23:54.023 01:24:16 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:23:54.023 01:24:16 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:54.023 01:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:23:54.023 01:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:54.023 01:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:23:54.023 01:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:54.023 01:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:54.023 rmmod nvme_tcp 00:23:54.023 rmmod nvme_fabrics 00:23:54.283 rmmod nvme_keyring 00:23:54.283 01:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:54.283 01:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:23:54.283 01:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:23:54.283 01:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 980193 ']' 00:23:54.283 01:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 980193 00:23:54.283 01:24:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 980193 ']' 00:23:54.283 01:24:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 980193 00:23:54.283 01:24:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:23:54.283 01:24:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:54.283 01:24:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 980193 00:23:54.283 01:24:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:54.283 01:24:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:54.283 01:24:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 980193' 00:23:54.283 killing process with pid 980193 00:23:54.283 01:24:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # 
kill 980193 00:23:54.283 01:24:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 980193 00:23:54.543 01:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:54.543 01:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:54.543 01:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:54.543 01:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:54.543 01:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:54.543 01:24:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:54.543 01:24:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:54.543 01:24:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:56.453 01:24:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:56.453 00:23:56.453 real 0m8.802s 00:23:56.453 user 0m7.341s 00:23:56.453 sys 0m4.070s 00:23:56.453 01:24:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:56.453 01:24:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:56.453 ************************************ 00:23:56.453 END TEST nvmf_identify 00:23:56.453 ************************************ 00:23:56.453 01:24:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:56.453 01:24:18 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:56.453 01:24:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:56.453 01:24:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:56.453 01:24:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:56.453 ************************************ 00:23:56.453 START TEST nvmf_perf 
00:23:56.453 ************************************ 00:23:56.453 01:24:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:56.714 * Looking for test storage... 00:23:56.714 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:56.714 01:24:18 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:56.714 01:24:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:23:56.714 01:24:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:56.714 01:24:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:56.714 01:24:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:56.714 01:24:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:56.714 01:24:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:56.714 01:24:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:56.714 01:24:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:56.714 01:24:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:56.714 01:24:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:56.714 01:24:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:56.714 01:24:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:56.714 01:24:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:56.714 01:24:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:56.714 01:24:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:56.714 01:24:19 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:56.714 01:24:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:56.714 01:24:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:56.714 01:24:19 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:56.714 01:24:19 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:56.714 01:24:19 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:56.714 01:24:19 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.715 01:24:19 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.715 01:24:19 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.715 01:24:19 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:23:56.715 01:24:19 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.715 01:24:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:23:56.715 01:24:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:56.715 01:24:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:56.715 01:24:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:56.715 01:24:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:56.715 01:24:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:56.715 01:24:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:56.715 01:24:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:56.715 01:24:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:23:56.715 01:24:19 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:56.715 01:24:19 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:56.715 01:24:19 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:56.715 01:24:19 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:23:56.715 01:24:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:56.715 01:24:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:56.715 01:24:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:56.715 01:24:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:56.715 01:24:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:56.715 01:24:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:56.715 01:24:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:56.715 01:24:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:56.715 01:24:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:56.715 01:24:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:56.715 01:24:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:23:56.715 01:24:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:01.994 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:01.994 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:24:01.994 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:01.994 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:01.994 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:01.994 01:24:24 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:01.994 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:01.994 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:24:01.994 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:01.994 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:24:01.994 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:24:01.994 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:24:01.994 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:24:01.994 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:24:01.994 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:24:01.994 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:01.994 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:01.994 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:01.994 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:01.994 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:01.994 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:01.994 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:01.994 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:01.994 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:01.994 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:01.994 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:01.994 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:01.994 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:01.994 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:01.994 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:01.994 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:01.994 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:01.994 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:01.994 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:01.994 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:01.994 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:01.994 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:01.994 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:01.994 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:01.994 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:01.994 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:01.994 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:01.994 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:01.994 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:01.994 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:01.994 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:01.994 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:01.994 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma 
]] 00:24:01.994 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:01.994 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:01.994 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:01.994 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:01.994 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:01.994 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:01.994 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:01.994 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:01.994 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:01.994 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:01.994 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:01.994 Found net devices under 0000:86:00.0: cvl_0_0 00:24:01.994 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:01.994 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:01.995 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:01.995 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:01.995 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:01.995 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:01.995 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:01.995 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:01.995 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: 
cvl_0_1' 00:24:01.995 Found net devices under 0000:86:00.1: cvl_0_1 00:24:01.995 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:01.995 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:01.995 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:24:01.995 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:01.995 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:01.995 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:01.995 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:01.995 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:01.995 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:01.995 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:01.995 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:01.995 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:01.995 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:01.995 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:01.995 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:01.995 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:01.995 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:01.995 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:01.995 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:01.995 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:01.995 
01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:01.995 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:01.995 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:01.995 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:01.995 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:01.995 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:01.995 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:01.995 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:24:01.995 00:24:01.995 --- 10.0.0.2 ping statistics --- 00:24:01.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:01.995 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:24:01.995 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:01.995 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:01.995 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.447 ms 00:24:01.995 00:24:01.995 --- 10.0.0.1 ping statistics --- 00:24:01.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:01.995 rtt min/avg/max/mdev = 0.447/0.447/0.447/0.000 ms 00:24:01.995 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:01.995 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:24:01.995 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:01.995 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:01.995 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:01.995 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:01.995 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:01.995 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:01.995 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:01.995 01:24:24 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:24:01.995 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:01.995 01:24:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:01.995 01:24:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:01.995 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=983738 00:24:01.995 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 983738 00:24:01.995 01:24:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:01.995 01:24:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 983738 ']' 00:24:01.995 01:24:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:24:01.995 01:24:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:01.995 01:24:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:01.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:01.995 01:24:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:01.995 01:24:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:01.995 [2024-07-25 01:24:24.440171] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:24:01.995 [2024-07-25 01:24:24.440212] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:01.995 EAL: No free 2048 kB hugepages reported on node 1 00:24:02.254 [2024-07-25 01:24:24.495572] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:02.254 [2024-07-25 01:24:24.576381] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:02.254 [2024-07-25 01:24:24.576428] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:02.254 [2024-07-25 01:24:24.576436] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:02.254 [2024-07-25 01:24:24.576442] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:02.254 [2024-07-25 01:24:24.576447] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:02.254 [2024-07-25 01:24:24.576494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:02.254 [2024-07-25 01:24:24.576513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:02.254 [2024-07-25 01:24:24.576601] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:02.254 [2024-07-25 01:24:24.576602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:02.833 01:24:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:02.833 01:24:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:24:02.833 01:24:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:02.833 01:24:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:02.833 01:24:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:02.833 01:24:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:02.833 01:24:25 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:02.833 01:24:25 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:06.122 01:24:28 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:24:06.122 01:24:28 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:24:06.123 01:24:28 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:24:06.123 01:24:28 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:06.383 01:24:28 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:24:06.383 01:24:28 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 
0000:5e:00.0 ']' 00:24:06.383 01:24:28 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:24:06.383 01:24:28 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:24:06.383 01:24:28 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:06.383 [2024-07-25 01:24:28.836629] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:06.383 01:24:28 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:06.643 01:24:29 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:06.643 01:24:29 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:06.903 01:24:29 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:06.903 01:24:29 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:07.164 01:24:29 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:07.164 [2024-07-25 01:24:29.573527] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:07.164 01:24:29 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:07.436 01:24:29 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:24:07.436 01:24:29 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 
00:24:07.436 01:24:29 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:24:07.436 01:24:29 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:24:08.869 Initializing NVMe Controllers 00:24:08.869 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:24:08.869 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:24:08.869 Initialization complete. Launching workers. 00:24:08.869 ======================================================== 00:24:08.869 Latency(us) 00:24:08.869 Device Information : IOPS MiB/s Average min max 00:24:08.869 PCIE (0000:5e:00.0) NSID 1 from core 0: 97496.22 380.84 327.82 20.60 8177.15 00:24:08.869 ======================================================== 00:24:08.869 Total : 97496.22 380.84 327.82 20.60 8177.15 00:24:08.869 00:24:08.869 01:24:31 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:08.869 EAL: No free 2048 kB hugepages reported on node 1 00:24:09.808 Initializing NVMe Controllers 00:24:09.808 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:09.808 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:09.808 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:09.808 Initialization complete. Launching workers. 
00:24:09.808 ======================================================== 00:24:09.808 Latency(us) 00:24:09.808 Device Information : IOPS MiB/s Average min max 00:24:09.808 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 114.59 0.45 8856.26 589.97 45495.63 00:24:09.808 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 46.83 0.18 21521.02 5961.46 49870.54 00:24:09.808 ======================================================== 00:24:09.808 Total : 161.43 0.63 12530.61 589.97 49870.54 00:24:09.808 00:24:09.808 01:24:32 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:09.808 EAL: No free 2048 kB hugepages reported on node 1 00:24:11.714 Initializing NVMe Controllers 00:24:11.714 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:11.714 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:11.714 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:11.714 Initialization complete. Launching workers. 
00:24:11.714 ======================================================== 00:24:11.714 Latency(us) 00:24:11.714 Device Information : IOPS MiB/s Average min max 00:24:11.714 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7426.85 29.01 4313.21 765.23 12251.82 00:24:11.714 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3880.92 15.16 8301.29 7185.42 16096.03 00:24:11.714 ======================================================== 00:24:11.714 Total : 11307.77 44.17 5681.95 765.23 16096.03 00:24:11.714 00:24:11.714 01:24:33 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:24:11.714 01:24:33 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:24:11.714 01:24:33 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:11.714 EAL: No free 2048 kB hugepages reported on node 1 00:24:14.252 Initializing NVMe Controllers 00:24:14.252 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:14.252 Controller IO queue size 128, less than required. 00:24:14.252 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:14.252 Controller IO queue size 128, less than required. 00:24:14.252 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:14.252 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:14.252 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:14.252 Initialization complete. Launching workers. 
00:24:14.252 ======================================================== 00:24:14.252 Latency(us) 00:24:14.252 Device Information : IOPS MiB/s Average min max 00:24:14.252 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 796.99 199.25 166009.37 96517.14 240228.58 00:24:14.252 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 587.00 146.75 228583.13 85720.44 378963.99 00:24:14.252 ======================================================== 00:24:14.252 Total : 1383.99 346.00 192548.96 85720.44 378963.99 00:24:14.252 00:24:14.252 01:24:36 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:24:14.252 EAL: No free 2048 kB hugepages reported on node 1 00:24:14.252 No valid NVMe controllers or AIO or URING devices found 00:24:14.252 Initializing NVMe Controllers 00:24:14.252 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:14.252 Controller IO queue size 128, less than required. 00:24:14.252 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:14.252 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:24:14.252 Controller IO queue size 128, less than required. 00:24:14.252 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:14.252 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:24:14.252 WARNING: Some requested NVMe devices were skipped 00:24:14.252 01:24:36 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:24:14.252 EAL: No free 2048 kB hugepages reported on node 1 00:24:16.786 Initializing NVMe Controllers 00:24:16.786 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:16.786 Controller IO queue size 128, less than required. 00:24:16.786 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:16.786 Controller IO queue size 128, less than required. 00:24:16.786 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:16.786 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:16.786 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:16.786 Initialization complete. Launching workers. 
00:24:16.786 00:24:16.786 ==================== 00:24:16.786 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:24:16.786 TCP transport: 00:24:16.786 polls: 58039 00:24:16.786 idle_polls: 20110 00:24:16.786 sock_completions: 37929 00:24:16.786 nvme_completions: 3369 00:24:16.786 submitted_requests: 5052 00:24:16.786 queued_requests: 1 00:24:16.786 00:24:16.786 ==================== 00:24:16.786 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:24:16.786 TCP transport: 00:24:16.786 polls: 61075 00:24:16.786 idle_polls: 20854 00:24:16.786 sock_completions: 40221 00:24:16.786 nvme_completions: 3333 00:24:16.786 submitted_requests: 5006 00:24:16.786 queued_requests: 1 00:24:16.786 ======================================================== 00:24:16.786 Latency(us) 00:24:16.786 Device Information : IOPS MiB/s Average min max 00:24:16.786 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 842.00 210.50 157524.65 75700.80 284645.01 00:24:16.786 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 833.00 208.25 158870.56 77361.73 271512.73 00:24:16.786 ======================================================== 00:24:16.786 Total : 1675.00 418.75 158193.99 75700.80 284645.01 00:24:16.786 00:24:16.786 01:24:39 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:24:16.786 01:24:39 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:16.786 01:24:39 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:24:16.786 01:24:39 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:24:16.786 01:24:39 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:24:16.786 01:24:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:16.786 01:24:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:24:16.786 01:24:39 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:16.786 01:24:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:24:16.786 01:24:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:16.786 01:24:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:16.787 rmmod nvme_tcp 00:24:16.787 rmmod nvme_fabrics 00:24:16.787 rmmod nvme_keyring 00:24:16.787 01:24:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:16.787 01:24:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:24:16.787 01:24:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:24:16.787 01:24:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 983738 ']' 00:24:16.787 01:24:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 983738 00:24:16.787 01:24:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 983738 ']' 00:24:16.787 01:24:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 983738 00:24:16.787 01:24:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:24:16.787 01:24:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:16.787 01:24:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 983738 00:24:17.047 01:24:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:17.047 01:24:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:17.047 01:24:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 983738' 00:24:17.047 killing process with pid 983738 00:24:17.047 01:24:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 983738 00:24:17.047 01:24:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 983738 00:24:18.428 01:24:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:18.428 01:24:40 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:18.428 01:24:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:18.428 01:24:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:18.428 01:24:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:18.428 01:24:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:18.428 01:24:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:18.428 01:24:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:20.967 01:24:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:20.967 00:24:20.967 real 0m23.977s 00:24:20.968 user 1m5.489s 00:24:20.968 sys 0m6.594s 00:24:20.968 01:24:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:20.968 01:24:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:20.968 ************************************ 00:24:20.968 END TEST nvmf_perf 00:24:20.968 ************************************ 00:24:20.968 01:24:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:20.968 01:24:42 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:20.968 01:24:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:20.968 01:24:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:20.968 01:24:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:20.968 ************************************ 00:24:20.968 START TEST nvmf_fio_host 00:24:20.968 ************************************ 00:24:20.968 01:24:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:20.968 * Looking for test storage... 
00:24:20.968 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:20.968 01:24:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:20.968 01:24:43 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:20.968 01:24:43 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:20.968 01:24:43 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:20.968 01:24:43 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.968 01:24:43 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.968 01:24:43 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.968 01:24:43 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:20.968 01:24:43 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.968 01:24:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:20.968 01:24:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:20.968 01:24:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:20.968 01:24:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:20.968 01:24:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:20.968 01:24:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:20.968 01:24:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:20.968 01:24:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:24:20.968 01:24:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:20.968 01:24:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:20.968 01:24:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:20.968 01:24:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:20.968 01:24:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:20.968 01:24:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:20.968 01:24:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:20.968 01:24:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:20.968 01:24:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:20.968 01:24:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:20.968 01:24:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:20.968 01:24:43 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:20.968 01:24:43 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:20.968 01:24:43 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:20.968 01:24:43 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.968 01:24:43 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.968 01:24:43 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.968 01:24:43 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:20.968 
01:24:43 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.968 01:24:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:24:20.968 01:24:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:20.968 01:24:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:20.968 01:24:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:20.968 01:24:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:20.968 01:24:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:20.968 01:24:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:20.968 01:24:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:20.968 01:24:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:20.968 01:24:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:20.968 01:24:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:24:20.968 01:24:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:20.968 01:24:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:20.968 01:24:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 
00:24:20.968 01:24:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:20.968 01:24:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:20.968 01:24:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:20.968 01:24:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:20.968 01:24:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:20.968 01:24:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:20.968 01:24:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:20.968 01:24:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:24:20.968 01:24:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.252 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:26.252 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:24:26.252 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:26.252 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:26.252 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:26.252 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:26.252 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:26.252 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:24:26.252 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:26.252 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:24:26.252 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:24:26.252 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:24:26.252 01:24:48 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@297 -- # local -ga x722 00:24:26.252 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:24:26.252 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:24:26.252 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:26.252 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:26.252 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:26.252 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:26.252 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:26.252 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:26.252 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:26.252 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:26.252 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:26.252 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:26.252 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:26.252 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:26.252 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:26.252 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:26.252 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:26.252 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:26.252 01:24:48 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:26.252 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:26.252 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:26.252 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:26.252 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:26.252 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:26.252 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:26.252 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:26.252 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:26.252 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:26.252 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:26.252 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:26.252 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:26.252 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:26.252 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:26.252 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:26.252 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:26.252 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:26.252 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:26.252 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:26.252 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:26.252 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:26.252 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:26.252 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:26.252 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:26.252 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:26.252 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:26.252 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:26.252 Found net devices under 0000:86:00.0: cvl_0_0 00:24:26.252 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:26.252 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:26.252 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:26.252 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:26.252 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:26.252 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:26.252 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:26.252 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:26.252 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:26.252 Found net devices under 0000:86:00.1: cvl_0_1 00:24:26.252 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:26.252 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:26.252 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:24:26.252 
01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:26.252 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:26.252 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:26.252 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:26.252 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:26.253 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:26.253 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:26.253 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:26.253 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:26.253 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:26.253 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:26.253 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:26.253 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:26.253 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:26.253 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:26.253 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:26.253 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:26.253 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:26.253 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:26.253 01:24:48 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:26.513 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:26.513 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:26.513 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:26.513 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:26.513 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:24:26.513 00:24:26.513 --- 10.0.0.2 ping statistics --- 00:24:26.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:26.513 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:24:26.513 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:26.513 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:26.513 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:24:26.513 00:24:26.513 --- 10.0.0.1 ping statistics --- 00:24:26.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:26.513 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:24:26.513 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:26.513 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:24:26.513 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:26.513 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:26.513 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:26.513 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:26.513 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:26.513 01:24:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:26.513 01:24:48 
nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:26.513 01:24:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:24:26.513 01:24:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:24:26.513 01:24:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:26.513 01:24:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.513 01:24:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=990051 00:24:26.513 01:24:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:26.513 01:24:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:26.513 01:24:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 990051 00:24:26.513 01:24:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 990051 ']' 00:24:26.513 01:24:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:26.513 01:24:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:26.513 01:24:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:26.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:26.513 01:24:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:26.513 01:24:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.513 [2024-07-25 01:24:48.919391] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:24:26.513 [2024-07-25 01:24:48.919434] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:26.513 EAL: No free 2048 kB hugepages reported on node 1 00:24:26.513 [2024-07-25 01:24:48.977613] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:26.773 [2024-07-25 01:24:49.051864] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:26.773 [2024-07-25 01:24:49.051907] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:26.773 [2024-07-25 01:24:49.051913] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:26.773 [2024-07-25 01:24:49.051919] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:26.773 [2024-07-25 01:24:49.051925] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:26.773 [2024-07-25 01:24:49.051972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:26.773 [2024-07-25 01:24:49.051989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:26.773 [2024-07-25 01:24:49.052080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:26.773 [2024-07-25 01:24:49.052082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:27.342 01:24:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:27.342 01:24:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:24:27.342 01:24:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:27.601 [2024-07-25 01:24:49.879301] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:27.601 01:24:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:27.601 01:24:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:27.601 01:24:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.601 01:24:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:27.861 Malloc1 00:24:27.861 01:24:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:27.861 01:24:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:28.120 01:24:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:28.382 
[2024-07-25 01:24:50.681670] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:28.382 01:24:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:28.686 01:24:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:28.686 01:24:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:28.686 01:24:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:28.686 01:24:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:28.686 01:24:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:28.686 01:24:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:28.686 01:24:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:28.686 01:24:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:24:28.686 01:24:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:28.686 01:24:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:28.686 01:24:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:24:28.686 01:24:50 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:28.686 01:24:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:28.686 01:24:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:28.686 01:24:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:28.686 01:24:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:28.686 01:24:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:28.686 01:24:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:28.686 01:24:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:28.686 01:24:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:28.686 01:24:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:28.686 01:24:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:28.686 01:24:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:28.943 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:28.943 fio-3.35 00:24:28.943 Starting 1 thread 00:24:28.943 EAL: No free 2048 kB hugepages reported on node 1 00:24:31.507 00:24:31.507 test: (groupid=0, jobs=1): err= 0: pid=990448: Thu Jul 25 01:24:53 2024 00:24:31.507 read: IOPS=11.3k, BW=44.3MiB/s (46.4MB/s)(88.7MiB/2004msec) 00:24:31.507 slat (nsec): min=1591, max=178517, avg=1763.43, stdev=1631.13 00:24:31.507 
clat (usec): min=3221, max=20327, avg=6591.38, stdev=1507.34 00:24:31.507 lat (usec): min=3223, max=20335, avg=6593.14, stdev=1507.49 00:24:31.507 clat percentiles (usec): 00:24:31.507 | 1.00th=[ 4424], 5.00th=[ 5014], 10.00th=[ 5276], 20.00th=[ 5669], 00:24:31.507 | 30.00th=[ 5866], 40.00th=[ 6063], 50.00th=[ 6259], 60.00th=[ 6456], 00:24:31.507 | 70.00th=[ 6783], 80.00th=[ 7177], 90.00th=[ 8160], 95.00th=[ 9634], 00:24:31.507 | 99.00th=[12387], 99.50th=[14222], 99.90th=[18220], 99.95th=[18744], 00:24:31.507 | 99.99th=[20055] 00:24:31.507 bw ( KiB/s): min=42624, max=47000, per=99.80%, avg=45250.00, stdev=1966.10, samples=4 00:24:31.507 iops : min=10656, max=11750, avg=11312.50, stdev=491.52, samples=4 00:24:31.507 write: IOPS=11.3k, BW=44.0MiB/s (46.1MB/s)(88.1MiB/2004msec); 0 zone resets 00:24:31.507 slat (nsec): min=1654, max=156285, avg=1852.89, stdev=1183.92 00:24:31.507 clat (usec): min=1953, max=18341, avg=4650.14, stdev=991.81 00:24:31.507 lat (usec): min=1955, max=18349, avg=4651.99, stdev=992.07 00:24:31.507 clat percentiles (usec): 00:24:31.507 | 1.00th=[ 2900], 5.00th=[ 3359], 10.00th=[ 3621], 20.00th=[ 3982], 00:24:31.507 | 30.00th=[ 4228], 40.00th=[ 4490], 50.00th=[ 4621], 60.00th=[ 4817], 00:24:31.507 | 70.00th=[ 4948], 80.00th=[ 5145], 90.00th=[ 5473], 95.00th=[ 5932], 00:24:31.507 | 99.00th=[ 7504], 99.50th=[ 8848], 99.90th=[15401], 99.95th=[16909], 00:24:31.507 | 99.99th=[18220] 00:24:31.507 bw ( KiB/s): min=43216, max=46024, per=100.00%, avg=45050.00, stdev=1249.27, samples=4 00:24:31.507 iops : min=10804, max=11506, avg=11262.50, stdev=312.32, samples=4 00:24:31.507 lat (msec) : 2=0.01%, 4=10.43%, 10=87.44%, 20=2.12%, 50=0.01% 00:24:31.507 cpu : usr=69.30%, sys=24.41%, ctx=260, majf=0, minf=6 00:24:31.507 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:31.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:31.508 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 
00:24:31.508 issued rwts: total=22716,22561,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:31.508 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:31.508 00:24:31.508 Run status group 0 (all jobs): 00:24:31.508 READ: bw=44.3MiB/s (46.4MB/s), 44.3MiB/s-44.3MiB/s (46.4MB/s-46.4MB/s), io=88.7MiB (93.0MB), run=2004-2004msec 00:24:31.508 WRITE: bw=44.0MiB/s (46.1MB/s), 44.0MiB/s-44.0MiB/s (46.1MB/s-46.1MB/s), io=88.1MiB (92.4MB), run=2004-2004msec 00:24:31.508 01:24:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:31.508 01:24:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:31.508 01:24:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:31.508 01:24:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:31.508 01:24:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:31.508 01:24:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:31.508 01:24:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:24:31.508 01:24:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:31.508 01:24:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:31.508 01:24:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:31.508 01:24:53 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:24:31.508 01:24:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:31.508 01:24:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:31.508 01:24:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:31.508 01:24:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:31.508 01:24:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:31.508 01:24:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:31.508 01:24:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:31.508 01:24:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:31.508 01:24:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:31.508 01:24:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:31.508 01:24:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:31.508 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:31.508 fio-3.35 00:24:31.508 Starting 1 thread 00:24:31.508 EAL: No free 2048 kB hugepages reported on node 1 00:24:34.035 00:24:34.035 test: (groupid=0, jobs=1): err= 0: pid=991012: Thu Jul 25 01:24:56 2024 00:24:34.035 read: IOPS=9083, BW=142MiB/s (149MB/s)(285MiB/2006msec) 00:24:34.035 slat (nsec): min=2569, max=96657, avg=2884.91, stdev=1353.13 00:24:34.035 clat (usec): min=2752, max=45022, 
avg=8674.04, stdev=3981.37 00:24:34.035 lat (usec): min=2755, max=45024, avg=8676.93, stdev=3981.80 00:24:34.035 clat percentiles (usec): 00:24:34.035 | 1.00th=[ 4178], 5.00th=[ 5014], 10.00th=[ 5538], 20.00th=[ 6390], 00:24:34.035 | 30.00th=[ 6915], 40.00th=[ 7439], 50.00th=[ 7963], 60.00th=[ 8455], 00:24:34.035 | 70.00th=[ 8979], 80.00th=[ 9765], 90.00th=[11600], 95.00th=[14091], 00:24:34.035 | 99.00th=[28181], 99.50th=[31327], 99.90th=[32113], 99.95th=[33162], 00:24:34.035 | 99.99th=[43779] 00:24:34.035 bw ( KiB/s): min=68448, max=80544, per=49.32%, avg=71672.00, stdev=5924.03, samples=4 00:24:34.035 iops : min= 4278, max= 5034, avg=4479.50, stdev=370.25, samples=4 00:24:34.035 write: IOPS=5274, BW=82.4MiB/s (86.4MB/s)(146MiB/1775msec); 0 zone resets 00:24:34.035 slat (usec): min=29, max=238, avg=32.04, stdev= 5.43 00:24:34.035 clat (usec): min=3960, max=36685, avg=9576.27, stdev=3931.96 00:24:34.035 lat (usec): min=3991, max=36718, avg=9608.31, stdev=3934.55 00:24:34.035 clat percentiles (usec): 00:24:34.035 | 1.00th=[ 6325], 5.00th=[ 6783], 10.00th=[ 7111], 20.00th=[ 7635], 00:24:34.035 | 30.00th=[ 8094], 40.00th=[ 8455], 50.00th=[ 8848], 60.00th=[ 9241], 00:24:34.035 | 70.00th=[ 9634], 80.00th=[10159], 90.00th=[11076], 95.00th=[12911], 00:24:34.035 | 99.00th=[31851], 99.50th=[32113], 99.90th=[34341], 99.95th=[35390], 00:24:34.035 | 99.99th=[36439] 00:24:34.035 bw ( KiB/s): min=71200, max=83392, per=88.35%, avg=74560.00, stdev=5913.05, samples=4 00:24:34.035 iops : min= 4450, max= 5212, avg=4660.00, stdev=369.57, samples=4 00:24:34.035 lat (msec) : 4=0.40%, 10=80.32%, 20=16.32%, 50=2.97% 00:24:34.035 cpu : usr=86.19%, sys=10.82%, ctx=15, majf=0, minf=3 00:24:34.035 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:24:34.035 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:34.035 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:34.035 issued rwts: total=18221,9362,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:24:34.035 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:34.035 00:24:34.035 Run status group 0 (all jobs): 00:24:34.035 READ: bw=142MiB/s (149MB/s), 142MiB/s-142MiB/s (149MB/s-149MB/s), io=285MiB (299MB), run=2006-2006msec 00:24:34.035 WRITE: bw=82.4MiB/s (86.4MB/s), 82.4MiB/s-82.4MiB/s (86.4MB/s-86.4MB/s), io=146MiB (153MB), run=1775-1775msec 00:24:34.035 01:24:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:34.035 01:24:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:24:34.035 01:24:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:34.035 01:24:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:34.035 01:24:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:24:34.035 01:24:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:34.035 01:24:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:24:34.035 01:24:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:34.035 01:24:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:24:34.035 01:24:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:34.035 01:24:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:34.035 rmmod nvme_tcp 00:24:34.035 rmmod nvme_fabrics 00:24:34.035 rmmod nvme_keyring 00:24:34.035 01:24:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:34.035 01:24:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:24:34.035 01:24:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:24:34.035 01:24:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 990051 ']' 00:24:34.035 01:24:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 990051 
00:24:34.035 01:24:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 990051 ']' 00:24:34.035 01:24:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 990051 00:24:34.035 01:24:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:24:34.035 01:24:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:34.035 01:24:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 990051 00:24:34.035 01:24:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:34.035 01:24:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:34.035 01:24:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 990051' 00:24:34.035 killing process with pid 990051 00:24:34.035 01:24:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 990051 00:24:34.035 01:24:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 990051 00:24:34.294 01:24:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:34.294 01:24:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:34.294 01:24:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:34.294 01:24:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:34.294 01:24:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:34.294 01:24:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:34.294 01:24:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:34.294 01:24:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:36.197 01:24:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:36.197 
00:24:36.197 real 0m15.713s 00:24:36.197 user 0m46.781s 00:24:36.197 sys 0m6.244s 00:24:36.197 01:24:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:36.197 01:24:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.197 ************************************ 00:24:36.197 END TEST nvmf_fio_host 00:24:36.197 ************************************ 00:24:36.456 01:24:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:36.456 01:24:58 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:36.456 01:24:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:36.456 01:24:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:36.456 01:24:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:36.456 ************************************ 00:24:36.456 START TEST nvmf_failover 00:24:36.456 ************************************ 00:24:36.456 01:24:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:36.456 * Looking for test storage... 
00:24:36.456 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:36.456 01:24:58 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:36.456 01:24:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:36.456 01:24:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:36.456 01:24:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:36.456 01:24:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:36.456 01:24:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:36.456 01:24:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:36.456 01:24:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:36.456 01:24:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:36.456 01:24:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:36.456 01:24:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:36.456 01:24:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:36.456 01:24:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:36.456 01:24:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:36.456 01:24:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:36.456 01:24:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:36.456 01:24:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:36.456 01:24:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:36.456 01:24:58 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:36.456 01:24:58 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:36.456 01:24:58 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:36.456 01:24:58 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:36.456 01:24:58 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.456 01:24:58 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.456 01:24:58 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.456 01:24:58 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:24:36.456 01:24:58 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.456 01:24:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:24:36.456 01:24:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:36.456 01:24:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:36.456 01:24:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:36.456 01:24:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:36.456 01:24:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:36.456 01:24:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:36.456 01:24:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:36.456 01:24:58 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:36.456 01:24:58 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:36.456 01:24:58 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:36.456 01:24:58 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:36.456 01:24:58 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:36.456 01:24:58 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:36.456 01:24:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:36.456 01:24:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:36.456 01:24:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:36.456 01:24:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:36.456 01:24:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:36.456 01:24:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:36.456 01:24:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:36.456 01:24:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:36.456 01:24:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:36.456 01:24:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:36.456 01:24:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:24:36.456 01:24:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:41.724 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:41.724 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:24:41.724 01:25:04 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:41.724 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:41.724 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:41.724 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:41.724 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:41.724 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:24:41.724 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:41.725 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:41.725 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:41.725 Found net devices under 0000:86:00.0: cvl_0_0 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
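The device-scan loop above matches each port's PCI vendor:device pair against the e810/x722/mlx ID tables built up in nvmf/common.sh (lines @296–@318). A minimal standalone sketch of that classification, using only the IDs visible in this log (the helper name `classify_nic` is mine, not part of the harness):

```shell
#!/usr/bin/env bash
# Classify a PCI NIC by vendor:device ID, mirroring the ID tables
# gathered in nvmf/common.sh (Intel E810/X722 parts, Mellanox mlx5 parts).
classify_nic() {
    local vendor=$1 device=$2
    case "$vendor:$device" in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;
        0x8086:0x37d2)               echo x722 ;;
        0x15b3:0xa2dc|0x15b3:0x1021|0x15b3:0xa2d6|0x15b3:0x101d|\
        0x15b3:0x1017|0x15b3:0x1019|0x15b3:0x1015|0x15b3:0x1013) echo mlx ;;
        *)                           echo unknown ;;
    esac
}

# Both ports found in this run (0000:86:00.0/1, 0x8086:0x159b) are E810:
classify_nic 0x8086 0x159b   # -> e810
```

This is why the log takes the `[[ e810 == e810 ]]` branch and binds the `ice` driver's `cvl_0_0`/`cvl_0_1` net devices.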
00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:41.725 Found net devices under 0000:86:00.1: cvl_0_1 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:41.725 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:41.984 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:41.984 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:41.984 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:41.984 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:41.984 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:41.984 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:24:41.984 00:24:41.984 --- 10.0.0.2 ping statistics --- 00:24:41.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:41.984 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:24:41.984 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:41.984 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:41.984 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.395 ms 00:24:41.984 00:24:41.984 --- 10.0.0.1 ping statistics --- 00:24:41.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:41.984 rtt min/avg/max/mdev = 0.395/0.395/0.395/0.000 ms 00:24:41.984 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:41.984 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:24:41.984 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:41.984 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:41.984 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:41.984 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:41.984 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:41.984 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:41.984 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:41.984 01:25:04 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:41.984 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:41.984 01:25:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:41.984 01:25:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:41.984 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=994967 00:24:41.984 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 994967 00:24:41.984 01:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:41.984 01:25:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 994967 ']' 
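The `nvmf_tcp_init` steps above move one port into a private network namespace so target and initiator can talk over real NICs on one host. The sequence can be sketched as below; the `run` indirection is my addition so the plumbing can be dry-run without root (set `run=sudo` to actually apply it), while the interface names, addresses, and port come straight from this log:

```shell
#!/usr/bin/env bash
# Namespace plumbing performed by nvmf_tcp_init in nvmf/common.sh:
# target port (cvl_0_0, 10.0.0.2) lives in a netns, initiator port
# (cvl_0_1, 10.0.0.1) stays in the root namespace.
run=${run:-echo}                 # dry-run by default; run=sudo to execute
NS=cvl_0_0_ns_spdk

$run ip netns add "$NS"
$run ip link set cvl_0_0 netns "$NS"          # target port into the netns
$run ip addr add 10.0.0.1/24 dev cvl_0_1      # initiator side
$run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
$run ip link set cvl_0_1 up
$run ip netns exec "$NS" ip link set cvl_0_0 up
$run ip netns exec "$NS" ip link set lo up
$run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
$run ping -c 1 10.0.0.2                       # reachability check, as in the log
```

The two successful pings in the log (0.176 ms out, 0.395 ms back from inside the namespace) are exactly this check passing in both directions.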
00:24:41.984 01:25:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:41.984 01:25:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:41.984 01:25:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:41.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:41.984 01:25:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:41.984 01:25:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:41.984 [2024-07-25 01:25:04.416109] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:24:41.984 [2024-07-25 01:25:04.416151] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:41.984 EAL: No free 2048 kB hugepages reported on node 1 00:24:41.984 [2024-07-25 01:25:04.473084] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:42.243 [2024-07-25 01:25:04.552772] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:42.243 [2024-07-25 01:25:04.552808] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:42.243 [2024-07-25 01:25:04.552815] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:42.243 [2024-07-25 01:25:04.552821] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:42.243 [2024-07-25 01:25:04.552826] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
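Once nvmf_tgt is up inside the namespace, host/failover.sh configures it through rpc.py (the @22–@28 lines that follow). Collected in one place, that sequence looks like this; the `rpc` indirection is mine so the block can be dry-run (point it at the real `scripts/rpc.py` to apply), the commands themselves are the ones from this run:

```shell
#!/usr/bin/env bash
# RPC sequence host/failover.sh drives after nvmfappstart, per the log:
# transport, backing bdev, subsystem, namespace, then three listeners.
rpc=${rpc:-echo rpc.py}          # dry-run by default

$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0     # MALLOC_BDEV_SIZE x MALLOC_BLOCK_SIZE
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do                # three listeners to fail over between
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s "$port"
done
```

The three "NVMe/TCP Target Listening on 10.0.0.2 port 442x" notices in the log correspond to the three add_listener calls.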
00:24:42.243 [2024-07-25 01:25:04.552922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:42.243 [2024-07-25 01:25:04.553005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:42.243 [2024-07-25 01:25:04.553007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:42.807 01:25:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:42.807 01:25:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:24:42.807 01:25:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:42.807 01:25:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:42.807 01:25:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:42.807 01:25:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:42.807 01:25:05 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:43.064 [2024-07-25 01:25:05.432962] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:43.064 01:25:05 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:43.322 Malloc0 00:24:43.322 01:25:05 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:43.580 01:25:05 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:43.580 01:25:06 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:43.837 [2024-07-25 01:25:06.193120] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:43.837 01:25:06 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:44.095 [2024-07-25 01:25:06.365626] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:44.095 01:25:06 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:44.095 [2024-07-25 01:25:06.534189] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:44.095 01:25:06 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:24:44.095 01:25:06 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=995233 00:24:44.095 01:25:06 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:44.095 01:25:06 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 995233 /var/tmp/bdevperf.sock 00:24:44.095 01:25:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 995233 ']' 00:24:44.095 01:25:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:44.095 01:25:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:44.095 01:25:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:44.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:44.095 01:25:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:44.095 01:25:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:24:45.027 01:25:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:45.027 01:25:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0
00:24:45.027 01:25:07 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:45.285 NVMe0n1
00:24:45.285 01:25:07 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:45.542 00:24:45.542
01:25:08 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:45.542 01:25:08 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=995474 00:24:45.542 01:25:08 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1
00:24:46.913 01:25:09 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:46.913 [2024-07-25 01:25:09.201116] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217c090 is same with the state(5) to be set
00:24:46.914 01:25:09 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:24:50.194 01:25:12 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:50.194 00:24:50.194
01:25:12 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:50.455 [2024-07-25 01:25:12.794767] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217d600 is same with the state(5) to be set
00:24:50.455 01:25:12 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:24:53.828 01:25:15 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:53.828 [2024-07-25 01:25:15.992244] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:53.828 01:25:16 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:24:54.770 01:25:17 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:54.770 [2024-07-25 01:25:17.191965] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217dce0 is same with the state(5) to be set
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217dce0 is same with the state(5) to be set 00:24:54.770 01:25:17 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 995474 00:25:01.336 0 00:25:01.336 01:25:23 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 995233 00:25:01.336 01:25:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 995233 ']' 00:25:01.336 01:25:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 995233 00:25:01.336 01:25:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:25:01.336 01:25:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:01.336 01:25:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 995233 00:25:01.336 01:25:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:01.336 01:25:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:01.336 01:25:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 995233' 00:25:01.336 killing process with pid 995233 00:25:01.336 01:25:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 995233 00:25:01.336 01:25:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 995233 00:25:01.336 01:25:23 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:01.336 [2024-07-25 01:25:06.593147] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
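The killprocess xtrace above (autotest_common.sh@948 through @972) shows a guarded kill: bail out on an empty pid, probe the process with kill -0, inspect the process name on Linux so a sudo wrapper is handled specially, then echo, kill, and wait. A minimal sketch of that flow, reconstructed from the traced commands (not the verbatim SPDK helper; the sudo-child lookup via --ppid is an illustrative assumption, since the traced run takes the reactor_0, non-sudo path):

```shell
#!/usr/bin/env bash
# Sketch of the killprocess flow seen in the xtrace; reconstructed, not SPDK source.
killprocess() {
    local pid=$1
    local process_name=""
    [ -z "$pid" ] && return 1                              # @948: no pid given
    kill -0 "$pid" 2>/dev/null || return 0                 # @952: already gone
    if [ "$(uname)" = Linux ]; then                        # @953
        process_name=$(ps --no-headers -o comm= "$pid")    # @954: e.g. reactor_0
    fi
    if [ "$process_name" = sudo ]; then                    # @958: target the child,
        pid=$(ps --ppid "$pid" -o pid= | xargs)            # not sudo itself (assumed)
    fi
    echo "killing process with pid $pid"                   # @966
    kill "$pid"                                            # @967
    wait "$pid" 2>/dev/null                                # @972: reap it
    return 0
}

# Demo: kill a background sleep and confirm it is gone.
sleep 60 &
bg=$!
killprocess "$bg"
if kill -0 "$bg" 2>/dev/null; then status=alive; else status=killed; fi
echo "$status"
```

The kill -0 probe sends no signal; it only checks that the pid exists and is signalable, which is why the helper can use it both as an existence test and (after killing) as a liveness check.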
00:25:01.336 [2024-07-25 01:25:06.593199] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid995233 ] 00:25:01.336 EAL: No free 2048 kB hugepages reported on node 1 00:25:01.336 [2024-07-25 01:25:06.648471] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:01.336 [2024-07-25 01:25:06.723880] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:01.336 Running I/O for 15 seconds... 00:25:01.336 [2024-07-25 01:25:09.201902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:103720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.336 [2024-07-25 01:25:09.201936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.336 [2024-07-25 01:25:09.201953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:103728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.336 [2024-07-25 01:25:09.201960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.336 [2024-07-25 01:25:09.201969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:103736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.336 [2024-07-25 01:25:09.201977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.336 [2024-07-25 01:25:09.201985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:103744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.336 [2024-07-25 01:25:09.201992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:25:01.336 [2024-07-25 01:25:09.202000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:103752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.336 [2024-07-25 01:25:09.202007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.336 [2024-07-25 01:25:09.202015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:103760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.336 [2024-07-25 01:25:09.202022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.336 [2024-07-25 01:25:09.202030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:103768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.336 [2024-07-25 01:25:09.202037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.336 [2024-07-25 01:25:09.202049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:103776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.336 [2024-07-25 01:25:09.202056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.336 [2024-07-25 01:25:09.202064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:103784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.336 [2024-07-25 01:25:09.202071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.336 [2024-07-25 01:25:09.202079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:103792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.336 [2024-07-25 
01:25:09.202086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.336 [2024-07-25 01:25:09.202094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:103800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.336 [2024-07-25 01:25:09.202100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.336 [2024-07-25 01:25:09.202113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:103808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.336 [2024-07-25 01:25:09.202120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.336 [2024-07-25 01:25:09.202128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:103816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.336 [2024-07-25 01:25:09.202136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.336 [2024-07-25 01:25:09.202144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:103824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.336 [2024-07-25 01:25:09.202150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.336 [2024-07-25 01:25:09.202158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:103832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.336 [2024-07-25 01:25:09.202165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.336 [2024-07-25 01:25:09.202173] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:106 nsid:1 lba:103840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.336 [2024-07-25 01:25:09.202179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.336 [2024-07-25 01:25:09.202187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:103848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.336 [2024-07-25 01:25:09.202194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.336 [2024-07-25 01:25:09.202203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:103856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.336 [2024-07-25 01:25:09.202209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.336 [2024-07-25 01:25:09.202217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:103864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.336 [2024-07-25 01:25:09.202224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.336 [2024-07-25 01:25:09.202232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:103872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.336 [2024-07-25 01:25:09.202239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.336 [2024-07-25 01:25:09.202247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:103880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.336 [2024-07-25 01:25:09.202254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.336 [2024-07-25 01:25:09.202262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:103888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.336 [2024-07-25 01:25:09.202268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.336 [2024-07-25 01:25:09.202276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:103896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.336 [2024-07-25 01:25:09.202283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.336 [2024-07-25 01:25:09.202291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:104288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.336 [2024-07-25 01:25:09.202306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.336 [2024-07-25 01:25:09.202314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:104296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.336 [2024-07-25 01:25:09.202321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.336 [2024-07-25 01:25:09.202329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:104304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.336 [2024-07-25 01:25:09.202335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.336 [2024-07-25 01:25:09.202344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:104312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.336 [2024-07-25 
01:25:09.202350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.336 [2024-07-25 01:25:09.202358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:104320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.336 [2024-07-25 01:25:09.202365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.336 [2024-07-25 01:25:09.202372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:104328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.336 [2024-07-25 01:25:09.202379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.337 [2024-07-25 01:25:09.202387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.337 [2024-07-25 01:25:09.202393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.337 [2024-07-25 01:25:09.202401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:104344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.337 [2024-07-25 01:25:09.202407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.337 [2024-07-25 01:25:09.202415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:104352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.337 [2024-07-25 01:25:09.202423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.337 [2024-07-25 01:25:09.202431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:57 nsid:1 lba:104360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.337 [2024-07-25 01:25:09.202438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.337 [2024-07-25 01:25:09.202446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:104368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.337 [2024-07-25 01:25:09.202452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.337 [2024-07-25 01:25:09.202460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:104376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.337 [2024-07-25 01:25:09.202467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.337 [2024-07-25 01:25:09.202474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:104384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.337 [2024-07-25 01:25:09.202481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.337 [2024-07-25 01:25:09.202490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:104392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.337 [2024-07-25 01:25:09.202497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.337 [2024-07-25 01:25:09.202505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:104400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.337 [2024-07-25 01:25:09.202512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:25:01.337 [2024-07-25 01:25:09.202520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:104408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.337 [2024-07-25 01:25:09.202526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.337 [2024-07-25 01:25:09.202534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:103904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.337 [2024-07-25 01:25:09.202541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.337 [2024-07-25 01:25:09.202549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:103912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.337 [2024-07-25 01:25:09.202555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.337 [2024-07-25 01:25:09.202563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:103920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.337 [2024-07-25 01:25:09.202569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.337 [2024-07-25 01:25:09.202578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:103928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.337 [2024-07-25 01:25:09.202584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.337 [2024-07-25 01:25:09.202592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:103936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.337 [2024-07-25 01:25:09.202598] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.337 [2024-07-25 01:25:09.202607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:103944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.337 [2024-07-25 01:25:09.202613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.337 [2024-07-25 01:25:09.202621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:103952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.337 [2024-07-25 01:25:09.202628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.337 [2024-07-25 01:25:09.202636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:103960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.337 [2024-07-25 01:25:09.202642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.337 [2024-07-25 01:25:09.202650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:103968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.337 [2024-07-25 01:25:09.202657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.337 [2024-07-25 01:25:09.202664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:103976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.337 [2024-07-25 01:25:09.202674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.337 [2024-07-25 01:25:09.202682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:53 nsid:1 lba:103984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.337 [2024-07-25 01:25:09.202689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.337 [2024-07-25 01:25:09.202697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:103992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.337 [2024-07-25 01:25:09.202704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.337 [2024-07-25 01:25:09.202712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:104000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.337 [2024-07-25 01:25:09.202718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.337 [2024-07-25 01:25:09.202726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:104008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.337 [2024-07-25 01:25:09.202733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.337 [2024-07-25 01:25:09.202741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:104016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.337 [2024-07-25 01:25:09.202747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.337 [2024-07-25 01:25:09.202755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:104024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.337 [2024-07-25 01:25:09.202761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:25:01.337 [2024-07-25 01:25:09.202769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:104032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.337 [2024-07-25 01:25:09.202776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.337 [2024-07-25 01:25:09.202783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:104040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.337 [2024-07-25 01:25:09.202790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.337 [2024-07-25 01:25:09.202798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:104048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.337 [2024-07-25 01:25:09.202804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.337 [2024-07-25 01:25:09.202812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:104056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.337 [2024-07-25 01:25:09.202819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.337 [2024-07-25 01:25:09.202827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:104064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.337 [2024-07-25 01:25:09.202833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.337 [2024-07-25 01:25:09.202841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:104072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.337 [2024-07-25 01:25:09.202847] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.337 [2024-07-25 01:25:09.202856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:104080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.337 [2024-07-25 01:25:09.202863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.337 [2024-07-25 01:25:09.202871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:104088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.337 [2024-07-25 01:25:09.202877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.337 [2024-07-25 01:25:09.202885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:104096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.337 [2024-07-25 01:25:09.202892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.337 [2024-07-25 01:25:09.202900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:104104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.337 [2024-07-25 01:25:09.202907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.337 [2024-07-25 01:25:09.202915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:104112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.337 [2024-07-25 01:25:09.202922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.337 [2024-07-25 01:25:09.202930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:43 nsid:1 lba:104120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.338 [2024-07-25 01:25:09.202937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.338 [2024-07-25 01:25:09.202945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:104128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.338 [2024-07-25 01:25:09.202951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.338 [2024-07-25 01:25:09.202959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:104136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.338 [2024-07-25 01:25:09.202966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.338 [2024-07-25 01:25:09.202974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:104144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.338 [2024-07-25 01:25:09.202980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.338 [2024-07-25 01:25:09.202988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:104152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.338 [2024-07-25 01:25:09.202994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.338 [2024-07-25 01:25:09.203002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:104416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.338 [2024-07-25 01:25:09.203009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:25:01.338 [2024-07-25 01:25:09.203017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:104424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.338 [2024-07-25 01:25:09.203023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.338 [2024-07-25 01:25:09.203031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:104432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.338 [2024-07-25 01:25:09.203039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.338 [2024-07-25 01:25:09.203051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:104440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.338 [2024-07-25 01:25:09.203057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.338 [2024-07-25 01:25:09.203066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:104448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.338 [2024-07-25 01:25:09.203072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.338 [2024-07-25 01:25:09.203080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:104456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.338 [2024-07-25 01:25:09.203087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.338 [2024-07-25 01:25:09.203095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:104464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.338 [2024-07-25 01:25:09.203101] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.338 [2024-07-25 01:25:09.203109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:104472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.338 [2024-07-25 01:25:09.203116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.338 [2024-07-25 01:25:09.203123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:104480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.338 [2024-07-25 01:25:09.203130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.338 [2024-07-25 01:25:09.203137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:104488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.338 [2024-07-25 01:25:09.203144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.338 [2024-07-25 01:25:09.203152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:104496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.338 [2024-07-25 01:25:09.203159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.338 [2024-07-25 01:25:09.203167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:104504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.338 [2024-07-25 01:25:09.203173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.338 [2024-07-25 01:25:09.203181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 
lba:104512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.338 [2024-07-25 01:25:09.203188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.338 [2024-07-25 01:25:09.203196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:104520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.338 [2024-07-25 01:25:09.203203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.338 [2024-07-25 01:25:09.203211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:104528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.338 [2024-07-25 01:25:09.203217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.338 [2024-07-25 01:25:09.203225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:104536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.338 [2024-07-25 01:25:09.203233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.338 [2024-07-25 01:25:09.203240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:104160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.338 [2024-07-25 01:25:09.203247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.338 [2024-07-25 01:25:09.203255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:104168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.338 [2024-07-25 01:25:09.203262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.338 
[2024-07-25 01:25:09.203270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:104176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.338 [2024-07-25 01:25:09.203276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.338 [2024-07-25 01:25:09.203284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:104184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.338 [2024-07-25 01:25:09.203290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.338 [2024-07-25 01:25:09.203298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:104192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.338 [2024-07-25 01:25:09.203305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.338 [2024-07-25 01:25:09.203313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:104200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.338 [2024-07-25 01:25:09.203319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.338 [2024-07-25 01:25:09.203330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:104208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.338 [2024-07-25 01:25:09.203336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.338 [2024-07-25 01:25:09.203344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:104216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.338 [2024-07-25 01:25:09.203351] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.338 [2024-07-25 01:25:09.203358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:104544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.338 [2024-07-25 01:25:09.203365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.338 [2024-07-25 01:25:09.203373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:104552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.338 [2024-07-25 01:25:09.203380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.338 [2024-07-25 01:25:09.203388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:104560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.338 [2024-07-25 01:25:09.203395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.338 [2024-07-25 01:25:09.203403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:104568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.338 [2024-07-25 01:25:09.203409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.338 [2024-07-25 01:25:09.203421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:104576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.338 [2024-07-25 01:25:09.203428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.338 [2024-07-25 01:25:09.203435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 
lba:104584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.338 [2024-07-25 01:25:09.203442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.338 [2024-07-25 01:25:09.203450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:104592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.338 [2024-07-25 01:25:09.203457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.338 [2024-07-25 01:25:09.203465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.338 [2024-07-25 01:25:09.203471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.338 [2024-07-25 01:25:09.203479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:104224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.338 [2024-07-25 01:25:09.203486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.338 [2024-07-25 01:25:09.203494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:104232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.338 [2024-07-25 01:25:09.203500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.338 [2024-07-25 01:25:09.203509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:104240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.338 [2024-07-25 01:25:09.203515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.339 
[2024-07-25 01:25:09.203523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:104248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.339 [2024-07-25 01:25:09.203529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.339 [2024-07-25 01:25:09.203537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:104256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.339 [2024-07-25 01:25:09.203543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.339 [2024-07-25 01:25:09.203551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:104264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.339 [2024-07-25 01:25:09.203558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.339 [2024-07-25 01:25:09.203567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:104272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.339 [2024-07-25 01:25:09.203573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.339 [2024-07-25 01:25:09.203581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:104280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.339 [2024-07-25 01:25:09.203588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.339 [2024-07-25 01:25:09.203596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:104608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.339 [2024-07-25 01:25:09.203604] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.339 [2024-07-25 01:25:09.203611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:104616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.339 [2024-07-25 01:25:09.203618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.339 [2024-07-25 01:25:09.203626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:104624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.339 [2024-07-25 01:25:09.203632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.339 [2024-07-25 01:25:09.203641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:104632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.339 [2024-07-25 01:25:09.203647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.339 [2024-07-25 01:25:09.203655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:104640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.339 [2024-07-25 01:25:09.203662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.339 [2024-07-25 01:25:09.203669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:104648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.339 [2024-07-25 01:25:09.203676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.339 [2024-07-25 01:25:09.203683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 
lba:104656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.339 [2024-07-25 01:25:09.203690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.339 [2024-07-25 01:25:09.203697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:104664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.339 [2024-07-25 01:25:09.203704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.339 [2024-07-25 01:25:09.203711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:104672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.339 [2024-07-25 01:25:09.203718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.339 [2024-07-25 01:25:09.203725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:104680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.339 [2024-07-25 01:25:09.203732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.339 [2024-07-25 01:25:09.203740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.339 [2024-07-25 01:25:09.203746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.339 [2024-07-25 01:25:09.203754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:104696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.339 [2024-07-25 01:25:09.203760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.339 
[2024-07-25 01:25:09.203768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:104704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.339 [2024-07-25 01:25:09.203774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.339 [2024-07-25 01:25:09.203783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:104712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.339 [2024-07-25 01:25:09.203789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.339 [2024-07-25 01:25:09.203799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:104720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.339 [2024-07-25 01:25:09.203805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.339 [2024-07-25 01:25:09.203813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:104728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.339 [2024-07-25 01:25:09.203819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.339 [2024-07-25 01:25:09.203838] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.339 [2024-07-25 01:25:09.203844] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.339 [2024-07-25 01:25:09.203850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104736 len:8 PRP1 0x0 PRP2 0x0 00:25:01.339 [2024-07-25 01:25:09.203859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.339 
[2024-07-25 01:25:09.203899] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x615730 was disconnected and freed. reset controller. 00:25:01.339 [2024-07-25 01:25:09.203908] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:01.339 [2024-07-25 01:25:09.203927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:01.339 [2024-07-25 01:25:09.203934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.339 [2024-07-25 01:25:09.203942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:01.339 [2024-07-25 01:25:09.203948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.339 [2024-07-25 01:25:09.203955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:01.339 [2024-07-25 01:25:09.203962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.339 [2024-07-25 01:25:09.203969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:01.339 [2024-07-25 01:25:09.203976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.339 [2024-07-25 01:25:09.203983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
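Each aborted I/O in the dump above appears as a command/completion pair: a `nvme_io_qpair_print_command` line giving the opcode, `sqid`, `cid`, and `lba`, followed by a `spdk_nvme_print_completion` line with the `ABORTED - SQ DELETION (00/08)` status. A minimal sketch (a hypothetical helper, not part of SPDK or this test harness) that tallies such entries by opcode when triaging a log like this one:

```python
import re
from collections import Counter

# Sample lines in the format emitted by SPDK's nvme_qpair.c print helpers,
# as seen in the dump above (pipeline timestamps trimmed for brevity).
LOG = """\
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:104120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:104416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
"""

# Match the opcode and identifiers of each printed I/O command line;
# completion lines ("ABORTED - ...") intentionally do not match.
CMD_RE = re.compile(
    r"\*NOTICE\*: (READ|WRITE) sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)"
)

def summarize(log: str) -> Counter:
    """Count aborted I/O commands per opcode from an SPDK qpair dump."""
    return Counter(m.group(1) for m in CMD_RE.finditer(log))

print(summarize(LOG))  # e.g. Counter({'READ': 1, 'WRITE': 1})
```

Run against the full console output, a summary like this makes it easy to confirm that every queued command was aborted with the same SQ-deletion status before the failover from 10.0.0.2:4420 to 10.0.0.2:4421 and the subsequent controller reset.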
00:25:01.339 [2024-07-25 01:25:09.204010] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f5540 (9): Bad file descriptor 00:25:01.339 [2024-07-25 01:25:09.206863] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.339 [2024-07-25 01:25:09.286390] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:01.339 [2024-07-25 01:25:12.795662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:48912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.339 [2024-07-25 01:25:12.795695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.339 [2024-07-25 01:25:12.795709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:48920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.339 [2024-07-25 01:25:12.795720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.339 [2024-07-25 01:25:12.795729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:48928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.339 [2024-07-25 01:25:12.795735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.339 [2024-07-25 01:25:12.795744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:48936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.339 [2024-07-25 01:25:12.795750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.339 [2024-07-25 01:25:12.795758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:48944 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:01.339 [2024-07-25 01:25:12.795764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.339 [2024-07-25 01:25:12.795772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:48952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.339 [2024-07-25 01:25:12.795779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.339 [2024-07-25 01:25:12.795787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:48960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.339 [2024-07-25 01:25:12.795794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.339 [2024-07-25 01:25:12.795802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:48968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.339 [2024-07-25 01:25:12.795808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.339 [2024-07-25 01:25:12.795816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:48976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.339 [2024-07-25 01:25:12.795822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.339 [2024-07-25 01:25:12.795830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:48984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.339 [2024-07-25 01:25:12.795836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.340 [2024-07-25 01:25:12.795844] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:48992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.340 [2024-07-25 01:25:12.795850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.340 [2024-07-25 01:25:12.795858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:49000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.340 [2024-07-25 01:25:12.795864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.340 [2024-07-25 01:25:12.795872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:49008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.340 [2024-07-25 01:25:12.795878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.340 [2024-07-25 01:25:12.795886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:49016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.340 [2024-07-25 01:25:12.795895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.340 [2024-07-25 01:25:12.795905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:49024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.340 [2024-07-25 01:25:12.795911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.340 [2024-07-25 01:25:12.795919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:49032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.340 [2024-07-25 01:25:12.795925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.340 [2024-07-25 01:25:12.795934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:49040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.340 [2024-07-25 01:25:12.795941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.340 [2024-07-25 01:25:12.795949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:49048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.340 [2024-07-25 01:25:12.795956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.340 [2024-07-25 01:25:12.795964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:49056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.340 [2024-07-25 01:25:12.795970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.340 [2024-07-25 01:25:12.795978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:49064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.340 [2024-07-25 01:25:12.795984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.340 [2024-07-25 01:25:12.795992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:49072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.340 [2024-07-25 01:25:12.795998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.340 [2024-07-25 01:25:12.796006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:49080 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:01.340 [2024-07-25 01:25:12.796012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.340 [2024-07-25 01:25:12.796020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:49088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.340 [2024-07-25 01:25:12.796026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.340 [2024-07-25 01:25:12.796034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:49096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.340 [2024-07-25 01:25:12.796040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.340 [2024-07-25 01:25:12.796054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:49104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.340 [2024-07-25 01:25:12.796061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.340 [2024-07-25 01:25:12.796069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:49112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.340 [2024-07-25 01:25:12.796076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.340 [2024-07-25 01:25:12.796084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:49120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.340 [2024-07-25 01:25:12.796092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.340 [2024-07-25 01:25:12.796100] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:49128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:01.340 [2024-07-25 01:25:12.796106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[repeated nvme_io_qpair_print_command / spdk_nvme_print_completion pairs omitted: READ and WRITE commands on sqid:1 (lba 49136-49888, len:8), each completed with ABORTED - SQ DELETION (00/08)]
00:25:01.343 [2024-07-25 01:25:12.797496] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:01.343 [2024-07-25 01:25:12.797503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49896 len:8 PRP1 0x0 PRP2 0x0
00:25:01.343 [2024-07-25 01:25:12.797509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:01.343 [2024-07-25 01:25:12.797518] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[repeated manual-completion / abort-queued-i/o entries omitted: queued WRITE commands lba 49904-49928 and queued READ lba 49184, each completed manually with ABORTED - SQ DELETION (00/08)]
00:25:01.343 [2024-07-25 01:25:12.797669] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x7c03c0 was disconnected and freed. reset controller.
00:25:01.343 [2024-07-25 01:25:12.797677] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:25:01.343 [2024-07-25 01:25:12.797695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:25:01.343 [2024-07-25 01:25:12.797703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[repeated admin-queue entries omitted: ASYNC EVENT REQUEST (0c) qid:0 cid:2, cid:1 and cid:0, each completed with ABORTED - SQ DELETION (00/08)]
00:25:01.343 [2024-07-25 01:25:12.797750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:01.343 [2024-07-25 01:25:12.800590] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:01.343 [2024-07-25 01:25:12.800620] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f5540 (9): Bad file descriptor
00:25:01.343 [2024-07-25 01:25:12.866613] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:25:01.343 [2024-07-25 01:25:17.193263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:76200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:01.343 [2024-07-25 01:25:17.193296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[repeated nvme_io_qpair_print_command / spdk_nvme_print_completion pairs omitted: READ commands on sqid:1 (lba 76208-76264, len:8), each completed with ABORTED - SQ DELETION (00/08)]
00:25:01.343 [2024-07-25 01:25:17.193428] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.343 [2024-07-25 01:25:17.193436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:76272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.343 [2024-07-25 01:25:17.193442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.343 [2024-07-25 01:25:17.193455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:76280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.343 [2024-07-25 01:25:17.193462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.343 [2024-07-25 01:25:17.193470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:76288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.343 [2024-07-25 01:25:17.193477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.343 [2024-07-25 01:25:17.193485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:76296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.343 [2024-07-25 01:25:17.193492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.343 [2024-07-25 01:25:17.193500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:76304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.343 [2024-07-25 01:25:17.193506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.343 [2024-07-25 01:25:17.193513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 
lba:76312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.343 [2024-07-25 01:25:17.193520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.343 [2024-07-25 01:25:17.193529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:76320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.343 [2024-07-25 01:25:17.193536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.343 [2024-07-25 01:25:17.193544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:76328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.343 [2024-07-25 01:25:17.193552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.343 [2024-07-25 01:25:17.193560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:76336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.344 [2024-07-25 01:25:17.193566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.344 [2024-07-25 01:25:17.193574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:76344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.344 [2024-07-25 01:25:17.193580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.344 [2024-07-25 01:25:17.193589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:76544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.344 [2024-07-25 01:25:17.193595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.344 
[2024-07-25 01:25:17.193603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:76552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.344 [2024-07-25 01:25:17.193609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.344 [2024-07-25 01:25:17.193617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:76560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.344 [2024-07-25 01:25:17.193623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.344 [2024-07-25 01:25:17.193632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:76568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.344 [2024-07-25 01:25:17.193638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.344 [2024-07-25 01:25:17.193646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:76576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.344 [2024-07-25 01:25:17.193652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.344 [2024-07-25 01:25:17.193660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:76584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.344 [2024-07-25 01:25:17.193666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.344 [2024-07-25 01:25:17.193674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:76592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.344 [2024-07-25 01:25:17.193681] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.344 [2024-07-25 01:25:17.193689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:76600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.344 [2024-07-25 01:25:17.193696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.344 [2024-07-25 01:25:17.193703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:76608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.344 [2024-07-25 01:25:17.193711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.344 [2024-07-25 01:25:17.193719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:76616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.344 [2024-07-25 01:25:17.193725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.344 [2024-07-25 01:25:17.193734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:76624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.344 [2024-07-25 01:25:17.193740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.344 [2024-07-25 01:25:17.193749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:76632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.344 [2024-07-25 01:25:17.193756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.344 [2024-07-25 01:25:17.193763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 
lba:76640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.344 [2024-07-25 01:25:17.193770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.344 [2024-07-25 01:25:17.193778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:76648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.344 [2024-07-25 01:25:17.193784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.344 [2024-07-25 01:25:17.193792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.344 [2024-07-25 01:25:17.193798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.344 [2024-07-25 01:25:17.193806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:76664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.344 [2024-07-25 01:25:17.193813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.344 [2024-07-25 01:25:17.193820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:76672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.344 [2024-07-25 01:25:17.193827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.344 [2024-07-25 01:25:17.193835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:76680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.344 [2024-07-25 01:25:17.193842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.344 [2024-07-25 
01:25:17.193849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:76688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.344 [2024-07-25 01:25:17.193856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.344 [2024-07-25 01:25:17.193864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:76696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.344 [2024-07-25 01:25:17.193870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.344 [2024-07-25 01:25:17.193878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:76704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.344 [2024-07-25 01:25:17.193884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.344 [2024-07-25 01:25:17.193892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:76712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.344 [2024-07-25 01:25:17.193900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.344 [2024-07-25 01:25:17.193908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:76720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.344 [2024-07-25 01:25:17.193915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.344 [2024-07-25 01:25:17.193923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:76728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.344 [2024-07-25 01:25:17.193929] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.344 [2024-07-25 01:25:17.193937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:76352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.344 [2024-07-25 01:25:17.193943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.344 [2024-07-25 01:25:17.193951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:76736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.344 [2024-07-25 01:25:17.193957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.344 [2024-07-25 01:25:17.193965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:76744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.344 [2024-07-25 01:25:17.193971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.344 [2024-07-25 01:25:17.193979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:76752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.344 [2024-07-25 01:25:17.193986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.344 [2024-07-25 01:25:17.193994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:76760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.344 [2024-07-25 01:25:17.194000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.344 [2024-07-25 01:25:17.194008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:76768 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:25:01.344 [2024-07-25 01:25:17.194014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.344 [2024-07-25 01:25:17.194022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:76776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.344 [2024-07-25 01:25:17.194028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.344 [2024-07-25 01:25:17.194036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:76784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.344 [2024-07-25 01:25:17.194049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.344 [2024-07-25 01:25:17.194057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:76792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.344 [2024-07-25 01:25:17.194063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.344 [2024-07-25 01:25:17.194071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.344 [2024-07-25 01:25:17.194078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.344 [2024-07-25 01:25:17.194088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:76808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.344 [2024-07-25 01:25:17.194094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.344 [2024-07-25 01:25:17.194102] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:76816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.344 [2024-07-25 01:25:17.194109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.344 [2024-07-25 01:25:17.194117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:76824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.344 [2024-07-25 01:25:17.194123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.344 [2024-07-25 01:25:17.194131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:76832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.344 [2024-07-25 01:25:17.194137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.345 [2024-07-25 01:25:17.194146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:76840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.345 [2024-07-25 01:25:17.194153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.345 [2024-07-25 01:25:17.194161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:76848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.345 [2024-07-25 01:25:17.194167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.345 [2024-07-25 01:25:17.194175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:76856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.345 [2024-07-25 01:25:17.194181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.345 [2024-07-25 01:25:17.194189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:76864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.345 [2024-07-25 01:25:17.194196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.345 [2024-07-25 01:25:17.194204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:76872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.345 [2024-07-25 01:25:17.194211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.345 [2024-07-25 01:25:17.194219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:76880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.345 [2024-07-25 01:25:17.194225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.345 [2024-07-25 01:25:17.194233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:76888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.345 [2024-07-25 01:25:17.194240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.345 [2024-07-25 01:25:17.194248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:76896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.345 [2024-07-25 01:25:17.194254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.345 [2024-07-25 01:25:17.194262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:76904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.345 
[2024-07-25 01:25:17.194270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.345 [2024-07-25 01:25:17.194278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:76912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.345 [2024-07-25 01:25:17.194285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.345 [2024-07-25 01:25:17.194293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:76920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.345 [2024-07-25 01:25:17.194299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.345 [2024-07-25 01:25:17.194307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:76928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.345 [2024-07-25 01:25:17.194313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.345 [2024-07-25 01:25:17.194321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:76936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.345 [2024-07-25 01:25:17.194327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.345 [2024-07-25 01:25:17.194336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:76944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.345 [2024-07-25 01:25:17.194343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.345 [2024-07-25 01:25:17.194352] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:52 nsid:1 lba:76360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.345 [2024-07-25 01:25:17.194358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.345 [2024-07-25 01:25:17.194366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:76368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.345 [2024-07-25 01:25:17.194372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.345 [2024-07-25 01:25:17.194380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:76376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.345 [2024-07-25 01:25:17.194387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.345 [2024-07-25 01:25:17.194395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:76384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.345 [2024-07-25 01:25:17.194401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.345 [2024-07-25 01:25:17.194409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:76392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.345 [2024-07-25 01:25:17.194415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.345 [2024-07-25 01:25:17.194423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:76400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.345 [2024-07-25 01:25:17.194430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:25:01.345 [2024-07-25 01:25:17.194438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:76408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.345 [2024-07-25 01:25:17.194444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.345 [2024-07-25 01:25:17.194454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:76416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.345 [2024-07-25 01:25:17.194461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.345 [2024-07-25 01:25:17.194469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:76424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.345 [2024-07-25 01:25:17.194475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.345 [2024-07-25 01:25:17.194483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:76432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.345 [2024-07-25 01:25:17.194489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.345 [2024-07-25 01:25:17.194497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:76440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.345 [2024-07-25 01:25:17.194503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.345 [2024-07-25 01:25:17.194511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:76448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.345 [2024-07-25 
01:25:17.194518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.345 [2024-07-25 01:25:17.194526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:76456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.345 [2024-07-25 01:25:17.194532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.345 [2024-07-25 01:25:17.194540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:76464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.345 [2024-07-25 01:25:17.194546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.345 [2024-07-25 01:25:17.194554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:76472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.345 [2024-07-25 01:25:17.194561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.345 [2024-07-25 01:25:17.194569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:76480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.345 [2024-07-25 01:25:17.194576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.345 [2024-07-25 01:25:17.194584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:76488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.345 [2024-07-25 01:25:17.194590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.345 [2024-07-25 01:25:17.194598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:57 nsid:1 lba:76496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.345 [2024-07-25 01:25:17.194604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.346 [2024-07-25 01:25:17.194612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:76504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.346 [2024-07-25 01:25:17.194618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.346 [2024-07-25 01:25:17.194626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.346 [2024-07-25 01:25:17.194632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.346 [2024-07-25 01:25:17.194641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:76520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.346 [2024-07-25 01:25:17.194648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.346 [2024-07-25 01:25:17.194656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:76528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.346 [2024-07-25 01:25:17.194662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.346 [2024-07-25 01:25:17.194670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:76536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.346 [2024-07-25 01:25:17.194676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:25:01.346 [2024-07-25 01:25:17.194690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:76952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.346 [2024-07-25 01:25:17.194697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.346 [2024-07-25 01:25:17.194705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:76960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.346 [2024-07-25 01:25:17.194711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.346 [2024-07-25 01:25:17.194719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:76968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.346 [2024-07-25 01:25:17.194726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.346 [2024-07-25 01:25:17.194734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:76976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.346 [2024-07-25 01:25:17.194740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.346 [2024-07-25 01:25:17.194748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:76984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.346 [2024-07-25 01:25:17.194755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.346 [2024-07-25 01:25:17.194763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:76992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.346 [2024-07-25 01:25:17.194769] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.346 [2024-07-25 01:25:17.194777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:77000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.346 [2024-07-25 01:25:17.194783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.346 [2024-07-25 01:25:17.194791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:77008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.346 [2024-07-25 01:25:17.194798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.346 [2024-07-25 01:25:17.194806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:77016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.346 [2024-07-25 01:25:17.194812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.346 [2024-07-25 01:25:17.194820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:77024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.346 [2024-07-25 01:25:17.194829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.346 [2024-07-25 01:25:17.194837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:77032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.346 [2024-07-25 01:25:17.194844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.346 [2024-07-25 01:25:17.194852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 
lba:77040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.346 [2024-07-25 01:25:17.194858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.346 [2024-07-25 01:25:17.194866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:77048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.346 [2024-07-25 01:25:17.194872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.346 [2024-07-25 01:25:17.194880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:77056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.346 [2024-07-25 01:25:17.194887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.346 [2024-07-25 01:25:17.194894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:77064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.346 [2024-07-25 01:25:17.194900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.346 [2024-07-25 01:25:17.194908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:77072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.346 [2024-07-25 01:25:17.194915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.346 [2024-07-25 01:25:17.194924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:77080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.346 [2024-07-25 01:25:17.194931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.346 [2024-07-25 
01:25:17.194939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:77088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.346 [2024-07-25 01:25:17.194945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.346 [2024-07-25 01:25:17.194953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:77096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.346 [2024-07-25 01:25:17.194959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.346 [2024-07-25 01:25:17.194967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:77104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.346 [2024-07-25 01:25:17.194974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.346 [2024-07-25 01:25:17.194981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.346 [2024-07-25 01:25:17.194988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.346 [2024-07-25 01:25:17.194996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:77120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.346 [2024-07-25 01:25:17.195002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.346 [2024-07-25 01:25:17.195011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:77128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.346 [2024-07-25 01:25:17.195017] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.346 [2024-07-25 01:25:17.195025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:77136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.346 [2024-07-25 01:25:17.195032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.346 [2024-07-25 01:25:17.195040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:77144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.346 [2024-07-25 01:25:17.195050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.346 [2024-07-25 01:25:17.195058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:77152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.346 [2024-07-25 01:25:17.195065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.346 [2024-07-25 01:25:17.195072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:77160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.346 [2024-07-25 01:25:17.195079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.346 [2024-07-25 01:25:17.195087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:77168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.346 [2024-07-25 01:25:17.195093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.346 [2024-07-25 01:25:17.195101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:77176 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:25:01.346 [2024-07-25 01:25:17.195108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.346 [2024-07-25 01:25:17.195116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:77184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.346 [2024-07-25 01:25:17.195123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.346 [2024-07-25 01:25:17.195131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:77192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.346 [2024-07-25 01:25:17.195137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.346 [2024-07-25 01:25:17.195145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:77200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.346 [2024-07-25 01:25:17.195152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.346 [2024-07-25 01:25:17.195172] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.346 [2024-07-25 01:25:17.195179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77208 len:8 PRP1 0x0 PRP2 0x0 00:25:01.346 [2024-07-25 01:25:17.195185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.346 [2024-07-25 01:25:17.195194] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:01.346 [2024-07-25 01:25:17.195199] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.347 [2024-07-25 01:25:17.195205] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77216 len:8 PRP1 0x0 PRP2 0x0 00:25:01.347 [2024-07-25 01:25:17.195211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.347 [2024-07-25 01:25:17.195253] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x7c0080 was disconnected and freed. reset controller. 00:25:01.347 [2024-07-25 01:25:17.195262] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:25:01.347 [2024-07-25 01:25:17.195281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:01.347 [2024-07-25 01:25:17.195288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.347 [2024-07-25 01:25:17.195296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:01.347 [2024-07-25 01:25:17.195302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.347 [2024-07-25 01:25:17.195309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:01.347 [2024-07-25 01:25:17.195315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.347 [2024-07-25 01:25:17.195322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:01.347 [2024-07-25 01:25:17.195329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.347 
[2024-07-25 01:25:17.195335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.347 [2024-07-25 01:25:17.195363] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f5540 (9): Bad file descriptor 00:25:01.347 [2024-07-25 01:25:17.198192] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.347 [2024-07-25 01:25:17.316556] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:01.347 00:25:01.347 Latency(us) 00:25:01.347 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:01.347 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:01.347 Verification LBA range: start 0x0 length 0x4000 00:25:01.347 NVMe0n1 : 15.01 11204.93 43.77 823.29 0.00 10618.68 1296.47 25416.57 00:25:01.347 =================================================================================================================== 00:25:01.347 Total : 11204.93 43.77 823.29 0.00 10618.68 1296.47 25416.57 00:25:01.347 Received shutdown signal, test time was about 15.000000 seconds 00:25:01.347 00:25:01.347 Latency(us) 00:25:01.347 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:01.347 =================================================================================================================== 00:25:01.347 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:01.347 01:25:23 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:25:01.347 01:25:23 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:25:01.347 01:25:23 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:25:01.347 01:25:23 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=997988 00:25:01.347 01:25:23 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:25:01.347 01:25:23 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 997988 /var/tmp/bdevperf.sock 00:25:01.347 01:25:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 997988 ']' 00:25:01.347 01:25:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:01.347 01:25:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:01.347 01:25:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:01.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:01.347 01:25:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:01.347 01:25:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:01.911 01:25:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:01.911 01:25:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:25:01.911 01:25:24 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:02.169 [2024-07-25 01:25:24.421759] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:02.169 01:25:24 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:02.169 [2024-07-25 01:25:24.598264] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:02.169 01:25:24 nvmf_tcp.nvmf_failover -- 
host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:02.734 NVMe0n1 00:25:02.734 01:25:24 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:02.992 00:25:02.992 01:25:25 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:03.250 00:25:03.508 01:25:25 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:03.508 01:25:25 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:25:03.508 01:25:25 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:03.766 01:25:26 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:25:07.044 01:25:29 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:07.044 01:25:29 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:25:07.044 01:25:29 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:07.044 01:25:29 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=998919 00:25:07.044 
01:25:29 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 998919 00:25:07.979 0 00:25:07.979 01:25:30 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:07.979 [2024-07-25 01:25:23.455807] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:25:07.979 [2024-07-25 01:25:23.455858] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid997988 ] 00:25:07.979 EAL: No free 2048 kB hugepages reported on node 1 00:25:07.979 [2024-07-25 01:25:23.509405] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:07.979 [2024-07-25 01:25:23.577935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:07.979 [2024-07-25 01:25:26.084973] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:07.979 [2024-07-25 01:25:26.085016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.979 [2024-07-25 01:25:26.085026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.979 [2024-07-25 01:25:26.085035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.979 [2024-07-25 01:25:26.085046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.979 [2024-07-25 01:25:26.085053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.979 [2024-07-25 01:25:26.085060] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.979 [2024-07-25 01:25:26.085067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.979 [2024-07-25 01:25:26.085074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.979 [2024-07-25 01:25:26.085081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:07.979 [2024-07-25 01:25:26.085107] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:07.979 [2024-07-25 01:25:26.085120] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c65540 (9): Bad file descriptor 00:25:07.979 [2024-07-25 01:25:26.090735] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:07.979 Running I/O for 1 seconds... 
00:25:07.979 00:25:07.979 Latency(us) 00:25:07.979 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:07.979 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:07.979 Verification LBA range: start 0x0 length 0x4000 00:25:07.979 NVMe0n1 : 1.01 11328.33 44.25 0.00 0.00 11237.39 1417.57 30089.57 00:25:07.979 =================================================================================================================== 00:25:07.979 Total : 11328.33 44.25 0.00 0.00 11237.39 1417.57 30089.57 00:25:07.979 01:25:30 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:07.979 01:25:30 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:25:08.237 01:25:30 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:08.494 01:25:30 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:25:08.494 01:25:30 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:08.494 01:25:30 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:08.785 01:25:31 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:25:12.064 01:25:34 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:12.064 01:25:34 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 
00:25:12.064 01:25:34 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 997988 00:25:12.064 01:25:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 997988 ']' 00:25:12.064 01:25:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 997988 00:25:12.064 01:25:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:25:12.064 01:25:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:12.064 01:25:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 997988 00:25:12.064 01:25:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:12.064 01:25:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:12.064 01:25:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 997988' 00:25:12.064 killing process with pid 997988 00:25:12.064 01:25:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 997988 00:25:12.064 01:25:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 997988 00:25:12.321 01:25:34 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:25:12.321 01:25:34 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:12.321 01:25:34 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:25:12.321 01:25:34 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:12.321 01:25:34 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:25:12.321 01:25:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:12.321 01:25:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:25:12.321 01:25:34 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:12.321 01:25:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:25:12.321 01:25:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:12.321 01:25:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:12.321 rmmod nvme_tcp 00:25:12.321 rmmod nvme_fabrics 00:25:12.321 rmmod nvme_keyring 00:25:12.579 01:25:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:12.579 01:25:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:25:12.579 01:25:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:25:12.579 01:25:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 994967 ']' 00:25:12.579 01:25:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 994967 00:25:12.579 01:25:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 994967 ']' 00:25:12.579 01:25:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 994967 00:25:12.579 01:25:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:25:12.580 01:25:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:12.580 01:25:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 994967 00:25:12.580 01:25:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:12.580 01:25:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:12.580 01:25:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 994967' 00:25:12.580 killing process with pid 994967 00:25:12.580 01:25:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 994967 00:25:12.580 01:25:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 994967 00:25:12.837 01:25:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' 
== iso ']' 00:25:12.837 01:25:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:12.837 01:25:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:12.837 01:25:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:12.837 01:25:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:12.838 01:25:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:12.838 01:25:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:12.838 01:25:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:14.745 01:25:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:14.745 00:25:14.745 real 0m38.396s 00:25:14.745 user 2m3.673s 00:25:14.745 sys 0m7.572s 00:25:14.745 01:25:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:14.745 01:25:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:14.745 ************************************ 00:25:14.745 END TEST nvmf_failover 00:25:14.745 ************************************ 00:25:14.745 01:25:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:14.745 01:25:37 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:14.745 01:25:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:14.745 01:25:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:14.745 01:25:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:14.745 ************************************ 00:25:14.745 START TEST nvmf_host_discovery 00:25:14.745 ************************************ 00:25:14.745 01:25:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:15.004 * Looking for test storage... 00:25:15.004 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:15.004 01:25:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:15.004 01:25:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:25:15.004 01:25:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:15.004 01:25:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:15.004 01:25:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:15.004 01:25:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:15.004 01:25:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:15.004 01:25:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:15.004 01:25:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:15.004 01:25:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:15.004 01:25:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:15.004 01:25:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:15.004 01:25:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:15.004 01:25:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:15.004 01:25:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:15.004 01:25:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme 
connect' 00:25:15.004 01:25:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:15.004 01:25:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:15.004 01:25:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:15.004 01:25:37 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:15.004 01:25:37 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:15.004 01:25:37 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:15.004 01:25:37 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.004 01:25:37 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.004 01:25:37 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.004 01:25:37 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:25:15.004 01:25:37 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.004 01:25:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:25:15.004 01:25:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:15.004 01:25:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:15.004 01:25:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:15.004 01:25:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:15.004 01:25:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:15.004 01:25:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:15.004 01:25:37 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:15.005 01:25:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:15.005 01:25:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:15.005 01:25:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:15.005 01:25:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:15.005 01:25:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:15.005 01:25:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:15.005 01:25:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:15.005 01:25:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:25:15.005 01:25:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:15.005 01:25:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:15.005 01:25:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:15.005 01:25:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:15.005 01:25:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:15.005 01:25:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:15.005 01:25:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:15.005 01:25:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:15.005 01:25:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:15.005 01:25:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:15.005 01:25:37 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@285 -- # xtrace_disable 00:25:15.005 01:25:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:20.315 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:20.315 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:25:20.315 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:20.315 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:20.315 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:20.315 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:20.315 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:20.315 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:25:20.315 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:20.315 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:25:20.315 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:25:20.315 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:25:20.315 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:25:20.315 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:25:20.315 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:25:20.315 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:20.315 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:20.315 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:20.315 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:20.315 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:20.315 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:20.315 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:20.315 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:20.315 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:20.315 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:20.315 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:20.315 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:20.315 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:20.315 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:20.315 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:20.315 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:20.315 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:20.315 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:20.315 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:20.315 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:20.315 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:20.315 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:20.315 01:25:42 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:20.315 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:20.315 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:20.315 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:20.315 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:20.315 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:20.315 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:20.315 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:20.315 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:20.315 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:20.315 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:20.315 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:20.315 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:20.315 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:20.315 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:20.315 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:20.316 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:20.316 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:20.316 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:20.316 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:20.316 
01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:20.316 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:20.316 Found net devices under 0000:86:00.0: cvl_0_0 00:25:20.316 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:20.316 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:20.316 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:20.316 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:20.316 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:20.316 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:20.316 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:20.316 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:20.316 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:20.316 Found net devices under 0000:86:00.1: cvl_0_1 00:25:20.316 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:20.316 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:20.316 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:25:20.316 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:20.316 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:20.316 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:20.316 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:25:20.316 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:20.316 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:20.316 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:20.316 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:20.316 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:20.316 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:20.316 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:20.316 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:20.316 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:20.316 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:20.316 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:20.316 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:20.316 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:20.316 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:20.316 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:20.316 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:20.574 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:20.574 01:25:42 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:20.574 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:20.574 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:20.574 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:25:20.574 00:25:20.574 --- 10.0.0.2 ping statistics --- 00:25:20.574 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:20.574 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:25:20.574 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:20.574 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:20.574 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:25:20.574 00:25:20.574 --- 10.0.0.1 ping statistics --- 00:25:20.574 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:20.574 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:25:20.574 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:20.574 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:25:20.574 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:20.574 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:20.574 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:20.574 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:20.574 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:20.574 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:20.574 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:20.574 01:25:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # 
nvmfappstart -m 0x2 00:25:20.574 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:20.574 01:25:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:20.574 01:25:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:20.574 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=1003353 00:25:20.574 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 1003353 00:25:20.574 01:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:20.574 01:25:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 1003353 ']' 00:25:20.574 01:25:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:20.574 01:25:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:20.574 01:25:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:20.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:20.574 01:25:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:20.574 01:25:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:20.574 [2024-07-25 01:25:42.949591] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:25:20.574 [2024-07-25 01:25:42.949633] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:20.574 EAL: No free 2048 kB hugepages reported on node 1 00:25:20.574 [2024-07-25 01:25:43.005411] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:20.832 [2024-07-25 01:25:43.085266] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:20.832 [2024-07-25 01:25:43.085299] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:20.832 [2024-07-25 01:25:43.085306] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:20.832 [2024-07-25 01:25:43.085312] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:20.832 [2024-07-25 01:25:43.085317] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:20.833 [2024-07-25 01:25:43.085336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:21.397 01:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:21.397 01:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:25:21.397 01:25:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:21.397 01:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:21.397 01:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.397 01:25:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:21.397 01:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:21.397 01:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.397 01:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.397 [2024-07-25 01:25:43.807730] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:21.397 01:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.397 01:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:21.397 01:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.397 01:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.397 [2024-07-25 01:25:43.819904] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:21.397 01:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.397 01:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- 
# rpc_cmd bdev_null_create null0 1000 512 00:25:21.397 01:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.397 01:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.397 null0 00:25:21.397 01:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.397 01:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:21.397 01:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.397 01:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.397 null1 00:25:21.397 01:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.397 01:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:21.397 01:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.397 01:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.397 01:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.397 01:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1003518 00:25:21.397 01:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:21.397 01:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1003518 /tmp/host.sock 00:25:21.397 01:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 1003518 ']' 00:25:21.397 01:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:25:21.397 01:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:21.397 01:25:43 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:21.397 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:21.397 01:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:21.397 01:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.656 [2024-07-25 01:25:43.897320] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:25:21.656 [2024-07-25 01:25:43.897362] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1003518 ] 00:25:21.656 EAL: No free 2048 kB hugepages reported on node 1 00:25:21.656 [2024-07-25 01:25:43.951300] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:21.656 [2024-07-25 01:25:44.031550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:22.222 01:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:22.222 01:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:25:22.222 01:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:22.222 01:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:22.222 01:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.222 01:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:22.222 01:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.222 01:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # 
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:22.222 01:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.222 01:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:22.481 01:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.481 01:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:22.481 01:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:22.481 01:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:22.481 01:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.481 01:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:22.481 01:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:22.481 01:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:22.481 01:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:22.481 01:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.481 01:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:22.481 01:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:22.481 01:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:22.481 01:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:22.481 01:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:22.481 01:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.481 01:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 
00:25:22.481 01:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:22.481 01:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.481 01:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:22.481 01:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:22.481 01:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.481 01:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:22.481 01:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.481 01:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:25:22.481 01:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:22.481 01:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.481 01:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:22.481 01:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:22.481 01:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:22.481 01:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:22.481 01:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.481 01:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:22.481 01:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:25:22.481 01:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:22.481 01:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:22.481 01:25:44 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@55 -- # xargs 00:25:22.481 01:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.481 01:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:22.481 01:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:22.481 01:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.481 01:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:25:22.481 01:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:22.481 01:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.481 01:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:22.481 01:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.481 01:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:25:22.481 01:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:22.481 01:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:22.481 01:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.481 01:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:22.481 01:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:22.481 01:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:22.481 01:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.739 01:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:25:22.739 01:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:25:22.739 01:25:44 nvmf_tcp.nvmf_host_discovery 
-- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:22.739 01:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.739 01:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:22.739 01:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:22.739 01:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:22.739 01:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:22.739 01:25:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.739 01:25:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:22.739 01:25:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:22.739 01:25:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.739 01:25:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:22.739 [2024-07-25 01:25:45.047129] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:22.739 01:25:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.739 01:25:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:25:22.739 01:25:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:22.739 01:25:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.739 01:25:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:22.739 01:25:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:22.739 01:25:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:22.739 01:25:45 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:22.739 01:25:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.739 01:25:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:25:22.739 01:25:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:25:22.739 01:25:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:22.739 01:25:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:22.739 01:25:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.739 01:25:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:22.739 01:25:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:22.739 01:25:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:22.739 01:25:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.739 01:25:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:25:22.739 01:25:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:25:22.739 01:25:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:22.739 01:25:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:22.739 01:25:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:22.739 01:25:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:22.739 01:25:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:22.739 01:25:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count 
'&&' '((notification_count' == 'expected_count))' 00:25:22.739 01:25:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:22.739 01:25:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:22.739 01:25:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:22.739 01:25:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.739 01:25:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:22.739 01:25:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.739 01:25:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:22.739 01:25:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:25:22.739 01:25:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:22.739 01:25:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:22.739 01:25:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:22.739 01:25:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.739 01:25:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:22.739 01:25:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.739 01:25:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:22.739 01:25:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:22.739 01:25:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:22.739 01:25:45 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:22.739 01:25:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:22.739 01:25:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:25:22.739 01:25:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:22.739 01:25:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.739 01:25:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:22.739 01:25:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:22.739 01:25:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:22.739 01:25:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:22.739 01:25:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.997 01:25:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:25:22.997 01:25:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:25:23.255 [2024-07-25 01:25:45.728350] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:23.255 [2024-07-25 01:25:45.728369] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:23.255 [2024-07-25 01:25:45.728382] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:23.514 [2024-07-25 01:25:45.858779] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:23.514 [2024-07-25 01:25:45.961165] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 
00:25:23.514 [2024-07-25 01:25:45.961182] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:23.772 01:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:23.772 01:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:23.772 01:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:25:23.772 01:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:23.772 01:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:23.772 01:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:23.772 01:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.030 01:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:24.030 01:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:24.030 01:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.030 01:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.030 01:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:24.030 01:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:24.030 01:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:24.030 01:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:24.030 01:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:24.030 01:25:46 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:25:24.030 01:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:25:24.030 01:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:24.030 01:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:24.030 01:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.030 01:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:24.030 01:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:24.030 01:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:24.030 01:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.030 01:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:25:24.030 01:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:24.030 01:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:24.030 01:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:24.030 01:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:24.030 01:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:24.030 01:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:25:24.030 01:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:25:24.030 01:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock 
bdev_nvme_get_controllers -n nvme0 00:25:24.030 01:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.030 01:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:24.030 01:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:24.031 01:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:24.031 01:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:24.031 01:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.031 01:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:25:24.031 01:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:24.031 01:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:25:24.031 01:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:24.031 01:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:24.031 01:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:24.031 01:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:24.031 01:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:24.031 01:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:24.031 01:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:24.031 01:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 
00:25:24.031 01:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:24.031 01:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.031 01:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:24.031 01:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.031 01:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:24.031 01:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:25:24.031 01:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:24.031 01:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:24.031 01:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:24.031 01:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.031 01:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:24.031 01:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.031 01:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:24.031 01:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:24.031 01:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:24.031 01:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:24.031 01:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:24.031 01:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- 
# get_bdev_list 00:25:24.031 01:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:24.031 01:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.031 01:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:24.031 01:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:24.031 01:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:24.031 01:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:24.031 01:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.031 01:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:24.031 01:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:24.031 01:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:25:24.031 01:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:24.031 01:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:24.031 01:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:24.031 01:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:24.031 01:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:24.031 01:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:24.031 01:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:24.031 01:25:46 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:24.031 01:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:24.031 01:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.031 01:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:24.031 01:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.290 01:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:24.290 01:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:25:24.290 01:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:24.290 01:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:25:25.227 01:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:25.227 01:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:25.227 01:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:25.227 01:25:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:25.227 01:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.227 01:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:25.227 01:25:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:25.227 01:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.227 01:25:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:25.227 01:25:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:25.227 01:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:25.227 01:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:25.227 01:25:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:25.227 01:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.227 01:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:25.227 [2024-07-25 01:25:47.586147] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:25.227 [2024-07-25 01:25:47.586873] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:25.227 [2024-07-25 01:25:47.586894] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:25.227 01:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.227 01:25:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:25.227 01:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:25.227 01:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:25.227 01:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:25.227 01:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' 
'"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:25.227 01:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:25:25.227 01:25:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:25.227 01:25:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:25.227 01:25:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:25.227 01:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.227 01:25:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:25.227 01:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:25.227 01:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.227 01:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.227 01:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:25.227 01:25:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:25.227 01:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:25.227 01:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:25.227 01:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:25.227 01:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:25.227 01:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:25:25.227 01:25:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:25.227 01:25:47 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@55 -- # jq -r '.[].name' 00:25:25.227 01:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.227 01:25:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:25.227 01:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:25.227 01:25:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:25.227 01:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.227 01:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:25.227 01:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:25.227 01:25:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:25.227 01:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:25.227 01:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:25.227 01:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:25.227 01:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:25.227 01:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:25:25.227 01:25:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:25.227 01:25:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:25.227 01:25:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:25.227 01:25:47 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.227 01:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:25.227 01:25:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:25.227 01:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.227 [2024-07-25 01:25:47.716294] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:25.486 01:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:25:25.486 01:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:25:25.486 [2024-07-25 01:25:47.941562] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:25.486 [2024-07-25 01:25:47.941579] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:25.486 [2024-07-25 01:25:47.941584] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:26.424 01:25:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:26.424 01:25:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:26.424 01:25:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:25:26.424 01:25:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:26.424 01:25:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:26.425 01:25:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:26.425 01:25:48 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.425 01:25:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:26.425 01:25:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:26.425 01:25:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.425 01:25:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:26.425 01:25:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:26.425 01:25:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:25:26.425 01:25:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:26.425 01:25:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:26.425 01:25:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:26.425 01:25:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:26.425 01:25:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:26.425 01:25:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:26.425 01:25:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:26.425 01:25:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:26.425 01:25:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.425 01:25:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:26.425 01:25:48 nvmf_tcp.nvmf_host_discovery 
-- host/discovery.sh@74 -- # jq '. | length' 00:25:26.425 01:25:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.425 01:25:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:26.425 01:25:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:26.425 01:25:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:26.425 01:25:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:26.425 01:25:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:26.425 01:25:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.425 01:25:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:26.425 [2024-07-25 01:25:48.854948] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:26.425 [2024-07-25 01:25:48.854969] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:26.425 [2024-07-25 01:25:48.855934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:26.425 [2024-07-25 01:25:48.855948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.425 [2024-07-25 01:25:48.855956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:26.425 [2024-07-25 01:25:48.855963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.425 [2024-07-25 01:25:48.855970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:25:26.425 [2024-07-25 01:25:48.855977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:26.425 [2024-07-25 01:25:48.855986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:25:26.425 [2024-07-25 01:25:48.855993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:26.425 [2024-07-25 01:25:48.855999] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1980f20 is same with the state(5) to be set
00:25:26.425 01:25:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:26.425 01:25:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:25:26.425 01:25:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:25:26.425 01:25:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10
00:25:26.425 01:25:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- ))
00:25:26.425 01:25:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:25:26.425 01:25:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names
00:25:26.425 01:25:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:25:26.425 01:25:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:26.425 01:25:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:26.425 01:25:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:25:26.425 01:25:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:25:26.425 01:25:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:25:26.425 [2024-07-25 01:25:48.866056] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1980f20 (9): Bad file descriptor
00:25:26.425 01:25:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:26.425 [2024-07-25 01:25:48.876092] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:25:26.425 [2024-07-25 01:25:48.876581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:26.425 [2024-07-25 01:25:48.876596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1980f20 with addr=10.0.0.2, port=4420
00:25:26.425 [2024-07-25 01:25:48.876604] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1980f20 is same with the state(5) to be set
00:25:26.425 [2024-07-25 01:25:48.876623] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1980f20 (9): Bad file descriptor
00:25:26.425 [2024-07-25 01:25:48.876633] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:25:26.425 [2024-07-25 01:25:48.876639] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:25:26.425 [2024-07-25 01:25:48.876646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:25:26.425 [2024-07-25 01:25:48.876657] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:26.425 [2024-07-25 01:25:48.886153] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:25:26.425 [2024-07-25 01:25:48.886674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:26.425 [2024-07-25 01:25:48.886687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1980f20 with addr=10.0.0.2, port=4420
00:25:26.425 [2024-07-25 01:25:48.886694] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1980f20 is same with the state(5) to be set
00:25:26.425 [2024-07-25 01:25:48.886704] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1980f20 (9): Bad file descriptor
00:25:26.425 [2024-07-25 01:25:48.886714] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:25:26.425 [2024-07-25 01:25:48.886720] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:25:26.425 [2024-07-25 01:25:48.886727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:25:26.425 [2024-07-25 01:25:48.886737] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:26.425 [2024-07-25 01:25:48.896201] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:25:26.425 [2024-07-25 01:25:48.896632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:26.425 [2024-07-25 01:25:48.896644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1980f20 with addr=10.0.0.2, port=4420
00:25:26.425 [2024-07-25 01:25:48.896651] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1980f20 is same with the state(5) to be set
00:25:26.425 [2024-07-25 01:25:48.896660] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1980f20 (9): Bad file descriptor
00:25:26.425 [2024-07-25 01:25:48.896669] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:25:26.425 [2024-07-25 01:25:48.896675] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:25:26.425 [2024-07-25 01:25:48.896682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:25:26.425 [2024-07-25 01:25:48.896691] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:26.425 [2024-07-25 01:25:48.906248] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:25:26.425 [2024-07-25 01:25:48.906475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:26.425 [2024-07-25 01:25:48.906489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1980f20 with addr=10.0.0.2, port=4420
00:25:26.425 [2024-07-25 01:25:48.906496] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1980f20 is same with the state(5) to be set
00:25:26.425 [2024-07-25 01:25:48.906506] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1980f20 (9): Bad file descriptor
00:25:26.425 [2024-07-25 01:25:48.906516] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:25:26.425 [2024-07-25 01:25:48.906522] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:25:26.425 [2024-07-25 01:25:48.906529] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:25:26.425 [2024-07-25 01:25:48.906538] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:26.425 01:25:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:26.425 01:25:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0
00:25:26.425 01:25:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:25:26.425 01:25:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:25:26.425 01:25:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10
00:25:26.425 01:25:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- ))
00:25:26.425 01:25:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:25:26.425 01:25:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list
00:25:26.426 [2024-07-25 01:25:48.916301] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:25:26.685 01:25:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:25:26.685 01:25:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:26.685 01:25:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:25:26.685 [2024-07-25 01:25:48.917363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:26.685 [2024-07-25 01:25:48.917385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1980f20 with addr=10.0.0.2, port=4420
00:25:26.685 [2024-07-25 01:25:48.917394] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1980f20 is same with the state(5) to be set
00:25:26.685 [2024-07-25 01:25:48.917409] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1980f20 (9): Bad file descriptor
00:25:26.685 01:25:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:26.685 [2024-07-25 01:25:48.917433] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:25:26.685 [2024-07-25 01:25:48.917444] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:25:26.685 [2024-07-25 01:25:48.917452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:25:26.685 [2024-07-25 01:25:48.917463] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:26.685 01:25:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:25:26.685 01:25:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:26.685 [2024-07-25 01:25:48.926354] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:25:26.685 [2024-07-25 01:25:48.926743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:26.685 [2024-07-25 01:25:48.926760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1980f20 with addr=10.0.0.2, port=4420
00:25:26.685 [2024-07-25 01:25:48.926767] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1980f20 is same with the state(5) to be set
00:25:26.685 [2024-07-25 01:25:48.926778] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1980f20 (9): Bad file descriptor
00:25:26.685 [2024-07-25 01:25:48.926794] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:25:26.685 [2024-07-25 01:25:48.926801] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:25:26.685 [2024-07-25 01:25:48.926808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:25:26.685 [2024-07-25 01:25:48.926817] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:26.685 [2024-07-25 01:25:48.936407] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:25:26.685 [2024-07-25 01:25:48.936904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:26.685 [2024-07-25 01:25:48.936916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1980f20 with addr=10.0.0.2, port=4420
00:25:26.685 [2024-07-25 01:25:48.936923] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1980f20 is same with the state(5) to be set
00:25:26.685 [2024-07-25 01:25:48.936933] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1980f20 (9): Bad file descriptor
00:25:26.685 [2024-07-25 01:25:48.936949] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:25:26.685 [2024-07-25 01:25:48.936955] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:25:26.685 [2024-07-25 01:25:48.936962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:25:26.685 [2024-07-25 01:25:48.936971] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
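The `waitforcondition` xtrace above (local cond, local max=10, (( max-- )), eval) suggests a simple bounded polling helper. The following is a minimal reconstruction sketched from the trace alone, not the actual SPDK `autotest_common.sh` source; the sleep interval and the marker-file example are assumptions for illustration:

```shell
#!/usr/bin/env bash
# Hypothetical reconstruction of the waitforcondition helper seen in the
# xtrace: re-evaluate a shell condition up to $max times, sleeping between
# attempts, and return non-zero if it never becomes true.
waitforcondition() {
    local cond=$1
    local max=10
    while ((max--)); do
        # eval re-parses the condition string each iteration, as in the trace
        if eval "$cond"; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}

# Example usage with an assumed marker file: the condition is true immediately,
# so the helper returns on the first poll.
touch /tmp/wfc_marker
waitforcondition '[[ -f /tmp/wfc_marker ]]' && echo "condition met"
rm -f /tmp/wfc_marker
```

The retries explain the repeated reconnect-failure blocks in the log: each poll of the condition triggers another RPC against a controller that is still resetting.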
00:25:26.685 [2024-07-25 01:25:48.942967] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found
00:25:26.685 [2024-07-25 01:25:48.942982] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:25:26.686 01:25:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:26.686 01:25:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:25:26.686 01:25:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0
00:25:26.686 01:25:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
00:25:26.686 01:25:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
00:25:26.686 01:25:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10
00:25:26.686 01:25:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- ))
00:25:26.686 01:25:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]'
00:25:26.686 01:25:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0
00:25:26.686 01:25:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:25:26.686 01:25:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:25:26.686 01:25:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:25:26.686 01:25:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:26.686 01:25:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:26.686 01:25:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:25:26.686 01:25:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:26.686 01:25:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]]
00:25:26.686 01:25:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0
00:25:26.686 01:25:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0
00:25:26.686 01:25:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:25:26.686 01:25:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:25:26.686 01:25:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:25:26.686 01:25:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10
00:25:26.686 01:25:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- ))
00:25:26.686 01:25:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:25:26.686 01:25:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count
00:25:26.686 01:25:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:25:26.686 01:25:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:25:26.686 01:25:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:26.686 01:25:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:26.686 01:25:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:26.686 01:25:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:25:26.686 01:25:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:25:26.686 01:25:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count ))
00:25:26.686 01:25:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0
00:25:26.686 01:25:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme
00:25:26.686 01:25:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:26.686 01:25:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:26.686 01:25:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:26.686 01:25:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]'
00:25:26.686 01:25:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]'
00:25:26.686 01:25:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10
00:25:26.686 01:25:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- ))
00:25:26.686 01:25:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]'
00:25:26.686 01:25:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names
00:25:26.686 01:25:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:25:26.686 01:25:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:26.686 01:25:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:26.686 01:25:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:25:26.686 01:25:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:25:26.686 01:25:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:25:26.686 01:25:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:26.686 01:25:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]]
00:25:26.686 01:25:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0
00:25:26.686 01:25:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]'
00:25:26.686 01:25:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]'
00:25:26.686 01:25:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10
00:25:26.686 01:25:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- ))
00:25:26.686 01:25:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]'
00:25:26.686 01:25:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list
00:25:26.686 01:25:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:25:26.686 01:25:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:26.686 01:25:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:25:26.686 01:25:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:25:26.686 01:25:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:26.686 01:25:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:26.686 01:25:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:26.686 01:25:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]]
00:25:26.686 01:25:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0
00:25:26.686 01:25:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2
00:25:26.686 01:25:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2
00:25:26.686 01:25:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:25:26.946 01:25:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:25:26.946 01:25:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10
00:25:26.946 01:25:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- ))
00:25:26.946 01:25:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:25:26.946 01:25:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count
00:25:26.946 01:25:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:25:26.946 01:25:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:25:26.946 01:25:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:26.946 01:25:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:26.946 01:25:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:26.946 01:25:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2
00:25:26.946 01:25:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4
00:25:26.946 01:25:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count ))
00:25:26.946 01:25:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0
00:25:26.946 01:25:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:25:26.946 01:25:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:26.946 01:25:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:27.886 [2024-07-25 01:25:50.281072] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:25:27.886 [2024-07-25 01:25:50.281096] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:25:27.886 [2024-07-25 01:25:50.281109] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:25:28.146 [2024-07-25 01:25:50.409481] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0
00:25:28.405 [2024-07-25 01:25:50.682081] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:25:28.405 [2024-07-25 01:25:50.682119] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:25:28.405 01:25:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:28.405 01:25:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:25:28.405 01:25:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0
00:25:28.405 01:25:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:25:28.405 01:25:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:25:28.405 01:25:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:25:28.405 01:25:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:25:28.405 01:25:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:25:28.405 01:25:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:25:28.405 01:25:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:28.405 01:25:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:28.405 request:
00:25:28.405 {
00:25:28.405 "name": "nvme",
00:25:28.405 "trtype": "tcp",
00:25:28.405 "traddr": "10.0.0.2",
00:25:28.405 "adrfam": "ipv4",
00:25:28.405 "trsvcid": "8009",
00:25:28.405 "hostnqn": "nqn.2021-12.io.spdk:test",
00:25:28.405 "wait_for_attach": true,
00:25:28.405 "method": "bdev_nvme_start_discovery",
00:25:28.405 "req_id": 1
00:25:28.405 }
00:25:28.405 Got JSON-RPC error response
00:25:28.405 response:
00:25:28.405 {
00:25:28.405 "code": -17,
00:25:28.405 "message": "File exists"
00:25:28.405 }
00:25:28.405 01:25:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:25:28.405 01:25:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1
00:25:28.405 01:25:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:25:28.405 01:25:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:25:28.405 01:25:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:25:28.405 01:25:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs
00:25:28.405 01:25:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:25:28.405 01:25:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name'
00:25:28.405 01:25:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort
00:25:28.405 01:25:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs
00:25:28.405 01:25:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:28.405 01:25:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:28.406 01:25:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:28.406 01:25:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]]
00:25:28.406 01:25:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list
00:25:28.406 01:25:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:28.406 01:25:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:25:28.406 01:25:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:25:28.406 01:25:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:28.406 01:25:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:25:28.406 01:25:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:28.406 01:25:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:28.406 01:25:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:25:28.406 01:25:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:25:28.406 01:25:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0
00:25:28.406 01:25:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:25:28.406 01:25:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:25:28.406 01:25:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:25:28.406 01:25:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:25:28.406 01:25:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:25:28.406 01:25:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:25:28.406 01:25:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:28.406 01:25:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:28.406 request:
00:25:28.406 {
00:25:28.406 "name": "nvme_second",
00:25:28.406 "trtype": "tcp",
00:25:28.406 "traddr": "10.0.0.2",
00:25:28.406 "adrfam": "ipv4",
00:25:28.406 "trsvcid": "8009",
00:25:28.406 "hostnqn": "nqn.2021-12.io.spdk:test",
00:25:28.406 "wait_for_attach": true,
00:25:28.406 "method": "bdev_nvme_start_discovery",
00:25:28.406 "req_id": 1
00:25:28.406 }
00:25:28.406 Got JSON-RPC error response
00:25:28.406 response:
00:25:28.406 {
00:25:28.406 "code": -17,
00:25:28.406 "message": "File exists"
00:25:28.406 }
00:25:28.406 01:25:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:25:28.406 01:25:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1
00:25:28.406 01:25:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:25:28.406 01:25:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:25:28.406 01:25:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:25:28.406 01:25:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs
00:25:28.406 01:25:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:25:28.406 01:25:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:28.406 01:25:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:28.406 01:25:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name'
00:25:28.406 01:25:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort
00:25:28.406 01:25:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs
00:25:28.406 01:25:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:28.406 01:25:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]]
00:25:28.406 01:25:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list
00:25:28.406 01:25:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:25:28.406 01:25:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:25:28.406 01:25:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:28.406 01:25:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:25:28.406 01:25:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:28.406 01:25:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:28.665 01:25:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:28.665 01:25:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:25:28.665 01:25:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:25:28.665 01:25:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0
00:25:28.665 01:25:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:25:28.665 01:25:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:25:28.665 01:25:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:25:28.665 01:25:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:25:28.665 01:25:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:25:28.665 01:25:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:25:28.665 01:25:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:28.665 01:25:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:29.603 [2024-07-25 01:25:51.921857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:29.603 [2024-07-25 01:25:51.921884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199be50 with addr=10.0.0.2, port=8010
00:25:29.603 [2024-07-25 01:25:51.921898] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:25:29.603 [2024-07-25 01:25:51.921904] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:25:29.603 [2024-07-25 01:25:51.921910] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect
00:25:30.542 [2024-07-25 01:25:52.924312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:30.542 [2024-07-25 01:25:52.924335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199be50 with addr=10.0.0.2, port=8010
00:25:30.542 [2024-07-25 01:25:52.924346] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:25:30.542 [2024-07-25 01:25:52.924352] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:25:30.542 [2024-07-25 01:25:52.924374] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect
00:25:31.479 [2024-07-25 01:25:53.926183] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr
00:25:31.479 request:
00:25:31.479 {
00:25:31.479 "name": "nvme_second",
00:25:31.479 "trtype": "tcp",
00:25:31.479 "traddr": "10.0.0.2",
00:25:31.479 "adrfam": "ipv4",
00:25:31.479 "trsvcid": "8010",
00:25:31.479 "hostnqn": "nqn.2021-12.io.spdk:test",
00:25:31.479 "wait_for_attach": false,
"attach_timeout_ms": 3000, 00:25:31.479 "method": "bdev_nvme_start_discovery", 00:25:31.479 "req_id": 1 00:25:31.479 } 00:25:31.479 Got JSON-RPC error response 00:25:31.479 response: 00:25:31.479 { 00:25:31.479 "code": -110, 00:25:31.479 "message": "Connection timed out" 00:25:31.479 } 00:25:31.479 01:25:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:31.479 01:25:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:25:31.479 01:25:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:31.479 01:25:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:31.480 01:25:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:31.480 01:25:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:25:31.480 01:25:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:31.480 01:25:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:31.480 01:25:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.480 01:25:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:31.480 01:25:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.480 01:25:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:31.480 01:25:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.813 01:25:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:25:31.813 01:25:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:25:31.813 01:25:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1003518 00:25:31.813 01:25:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 
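The failed call above shows the JSON-RPC request and the `-110 Connection timed out` response for `bdev_nvme_start_discovery` with `attach_timeout_ms` set to 3000. As a minimal sketch (field names and values are copied from the log output; the helper function itself is hypothetical, not part of SPDK), the request envelope can be reconstructed like this:

```python
import json

# Build a JSON-RPC request mirroring the bdev_nvme_start_discovery call
# seen in the log; every parameter value is taken verbatim from the log.
def make_discovery_request(req_id: int = 1) -> dict:
    return {
        "jsonrpc": "2.0",
        "id": req_id,
        "method": "bdev_nvme_start_discovery",
        "params": {
            "name": "nvme_second",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "8010",
            "hostnqn": "nqn.2021-12.io.spdk:test",
            "wait_for_attach": False,
            # Discovery gives up after 3 s, matching the timeout error above.
            "attach_timeout_ms": 3000,
        },
    }

payload = json.dumps(make_discovery_request())
```

Because `wait_for_attach` is false and no listener answers on port 8010, the three connect attempts above each fail with errno 111 before the 3000 ms attach timeout expires.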
00:25:31.813 01:25:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:31.813 01:25:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:25:31.813 01:25:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:31.813 01:25:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:25:31.813 01:25:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:31.813 01:25:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:31.813 rmmod nvme_tcp 00:25:31.813 rmmod nvme_fabrics 00:25:31.813 rmmod nvme_keyring 00:25:31.813 01:25:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:31.813 01:25:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:25:31.813 01:25:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:25:31.813 01:25:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 1003353 ']' 00:25:31.813 01:25:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 1003353 00:25:31.813 01:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 1003353 ']' 00:25:31.813 01:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 1003353 00:25:31.813 01:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:25:31.813 01:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:31.813 01:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1003353 00:25:31.813 01:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:31.813 01:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:31.814 01:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with 
pid 1003353' 00:25:31.814 killing process with pid 1003353 00:25:31.814 01:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 1003353 00:25:31.814 01:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 1003353 00:25:31.814 01:25:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:31.814 01:25:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:31.814 01:25:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:31.814 01:25:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:31.814 01:25:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:31.814 01:25:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:31.814 01:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:32.073 01:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:33.984 01:25:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:33.984 00:25:33.984 real 0m19.115s 00:25:33.984 user 0m24.802s 00:25:33.984 sys 0m5.519s 00:25:33.984 01:25:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:33.984 01:25:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.984 ************************************ 00:25:33.984 END TEST nvmf_host_discovery 00:25:33.984 ************************************ 00:25:33.984 01:25:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:33.984 01:25:56 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:33.984 01:25:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 
3 -le 1 ']' 00:25:33.984 01:25:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:33.984 01:25:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:33.984 ************************************ 00:25:33.984 START TEST nvmf_host_multipath_status 00:25:33.984 ************************************ 00:25:33.984 01:25:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:34.245 * Looking for test storage... 00:25:34.245 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:34.245 01:25:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:34.245 01:25:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:25:34.245 01:25:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:34.245 01:25:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:34.245 01:25:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:34.245 01:25:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:34.245 01:25:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:34.245 01:25:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:34.245 01:25:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:34.245 01:25:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:34.245 01:25:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:34.245 01:25:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
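The `nvme gen-hostnqn` call traced here produces the `NVME_HOSTNQN` value of the form `nqn.2014-08.org.nvmexpress:uuid:<uuid>` used by the test. A minimal sketch of generating the same format (the helper is hypothetical; only the NQN layout comes from the log):

```python
import re
import uuid

def gen_hostnqn() -> str:
    # Mirror the `nvme gen-hostnqn` output format seen in the log:
    # nqn.2014-08.org.nvmexpress:uuid:<random UUID>
    return f"nqn.2014-08.org.nvmexpress:uuid:{uuid.uuid4()}"

# Validate the expected shape: the date-qualified NQN prefix followed by
# a lowercase hyphenated UUID.
NQN_RE = re.compile(
    r"^nqn\.2014-08\.org\.nvmexpress:uuid:"
    r"[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$"
)
```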
00:25:34.245 01:25:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:34.245 01:25:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:34.245 01:25:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:34.245 01:25:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:34.245 01:25:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:34.245 01:25:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:34.245 01:25:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:34.245 01:25:56 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:34.245 01:25:56 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:34.245 01:25:56 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:34.245 01:25:56 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.246 01:25:56 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.246 01:25:56 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.246 01:25:56 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:25:34.246 01:25:56 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.246 01:25:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:25:34.246 01:25:56 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:34.246 01:25:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:34.246 01:25:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:34.246 01:25:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:34.246 01:25:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:34.246 01:25:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:34.246 01:25:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:34.246 01:25:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:34.246 01:25:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:34.246 01:25:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:34.246 01:25:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:34.246 01:25:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:25:34.246 01:25:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:34.246 01:25:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:34.246 01:25:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:25:34.246 01:25:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:34.246 01:25:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:34.246 
01:25:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:34.246 01:25:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:34.246 01:25:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:34.246 01:25:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:34.246 01:25:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:34.246 01:25:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:34.246 01:25:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:34.246 01:25:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:34.246 01:25:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:25:34.246 01:25:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:39.532 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:39.532 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:25:39.532 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:39.532 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:39.532 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:39.532 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:39.532 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:39.532 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:25:39.532 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:25:39.532 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:25:39.532 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:25:39.532 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:25:39.532 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:25:39.532 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:25:39.532 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:25:39.532 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:39.532 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:39.532 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:39.532 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:39.532 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:39.532 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:39.532 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:39.532 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:39.532 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:39.532 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:39.532 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:39.532 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:39.532 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:39.532 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:39.532 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:39.532 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:39.532 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:39.532 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:39.532 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:39.532 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:39.532 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:39.532 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:39.532 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:39.532 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:39.532 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:39.532 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:39.532 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:39.532 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:39.532 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:39.532 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == 
unbound ]] 00:25:39.532 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:39.532 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:39.532 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:39.532 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:39.532 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:39.532 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:39.532 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:39.532 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:39.532 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:39.532 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:39.532 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:39.532 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:39.532 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:39.532 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:39.532 Found net devices under 0000:86:00.0: cvl_0_0 00:25:39.532 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:39.532 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:39.532 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:39.532 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:39.532 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:39.532 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:39.532 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:39.532 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:39.532 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:39.532 Found net devices under 0000:86:00.1: cvl_0_1 00:25:39.532 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:39.532 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:39.532 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:25:39.532 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:39.532 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:39.532 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:39.532 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:39.532 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:39.532 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:39.533 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:39.533 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:39.533 01:26:01 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:39.533 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:39.533 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:39.533 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:39.533 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:39.533 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:39.533 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:39.533 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:39.533 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:39.533 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:39.533 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:39.533 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:39.533 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:39.533 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:39.533 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:39.533 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:39.533 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:25:39.533 00:25:39.533 --- 10.0.0.2 ping statistics --- 00:25:39.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:39.533 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:25:39.533 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:39.533 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:39.533 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.368 ms 00:25:39.533 00:25:39.533 --- 10.0.0.1 ping statistics --- 00:25:39.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:39.533 rtt min/avg/max/mdev = 0.368/0.368/0.368/0.000 ms 00:25:39.533 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:39.533 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:25:39.533 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:39.533 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:39.533 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:39.533 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:39.533 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:39.533 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:39.533 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:39.533 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:25:39.533 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:39.533 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:25:39.533 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:39.533 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=1008676 00:25:39.533 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 1008676 00:25:39.533 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 1008676 ']' 00:25:39.533 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:39.533 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:39.533 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:39.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:39.533 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:39.533 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:39.533 01:26:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:39.533 [2024-07-25 01:26:01.770934] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:25:39.533 [2024-07-25 01:26:01.770979] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:39.533 EAL: No free 2048 kB hugepages reported on node 1 00:25:39.533 [2024-07-25 01:26:01.826866] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:39.533 [2024-07-25 01:26:01.905991] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:39.533 [2024-07-25 01:26:01.906025] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:39.533 [2024-07-25 01:26:01.906032] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:39.533 [2024-07-25 01:26:01.906037] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:39.533 [2024-07-25 01:26:01.906045] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
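The target here is launched with core mask `-m 0x3`, and the startup notices report two reactors. As a sketch of how such a hex mask maps to CPU cores (the decoding helper is hypothetical, not SPDK code; only the mask value comes from the log):

```python
def cores_from_mask(mask: int) -> list[int]:
    # Each set bit in the mask selects one CPU core:
    # 0x3 -> cores 0 and 1, matching the two reactor threads in the log.
    cores = []
    bit = 0
    while mask:
        if mask & 1:
            cores.append(bit)
        mask >>= 1
        bit += 1
    return cores
```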
00:25:39.533 [2024-07-25 01:26:01.906083] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:25:39.533 [2024-07-25 01:26:01.906085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:25:40.103 01:26:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:25:40.103 01:26:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0
00:25:40.103 01:26:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:25:40.103 01:26:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable
00:25:40.103 01:26:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:25:40.103 01:26:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:25:40.103 01:26:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1008676
00:25:40.103 01:26:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:25:40.363 [2024-07-25 01:26:02.735032] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:25:40.363 01:26:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:25:40.623 Malloc0
00:25:40.623 01:26:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
00:25:40.884 01:26:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns
nqn.2016-06.io.spdk:cnode1 Malloc0
00:25:40.884 01:26:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:41.143 [2024-07-25 01:26:03.441202] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:41.143 01:26:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:25:41.143 [2024-07-25 01:26:03.621672] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:25:41.402 01:26:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1009093
00:25:41.402 01:26:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90
00:25:41.402 01:26:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:25:41.402 01:26:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1009093 /var/tmp/bdevperf.sock
00:25:41.402 01:26:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 1009093 ']'
00:25:41.402 01:26:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:25:41.402 01:26:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100
00:25:41.402 01:26:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
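Stripped of the xtrace noise, the target-side configuration driven above boils down to the following RPC sequence. This is a sketch of what the log records, not a standalone script: `rpc.py` abbreviates the full `scripts/rpc.py` path, and a running `nvmf_tgt` is assumed.

```shell
# Sketch of the target setup recorded in the trace (requires a live
# SPDK nvmf_tgt; rpc.py stands for spdk/scripts/rpc.py).
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
# Two listeners on the same address but different ports give the
# bdevperf initiator two distinct I/O paths to the one namespace:
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
```

bdevperf then attaches to both listeners with `bdev_nvme_attach_controller ... -x multipath`, which is what the later ANA-state checks exercise.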
00:25:41.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:41.402 01:26:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:41.402 01:26:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:42.121 01:26:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:42.121 01:26:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:25:42.121 01:26:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:42.381 01:26:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:25:42.641 Nvme0n1 00:25:42.641 01:26:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:43.212 Nvme0n1 00:25:43.212 01:26:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:25:43.212 01:26:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:25:45.122 01:26:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:25:45.122 01:26:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:45.382 01:26:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:45.382 01:26:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:25:46.762 01:26:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:25:46.762 01:26:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:46.762 01:26:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:46.762 01:26:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:46.762 01:26:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:46.762 01:26:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:46.762 01:26:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:46.762 01:26:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:46.762 01:26:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:46.762 01:26:09 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:46.762 01:26:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:46.762 01:26:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:47.020 01:26:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:47.020 01:26:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:47.020 01:26:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:47.020 01:26:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:47.280 01:26:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:47.280 01:26:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:47.280 01:26:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:47.280 01:26:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:47.280 01:26:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:47.280 01:26:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:47.280 01:26:09 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:47.280 01:26:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:47.539 01:26:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:47.539 01:26:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:25:47.539 01:26:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:47.798 01:26:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:48.058 01:26:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:25:48.997 01:26:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:25:48.997 01:26:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:48.997 01:26:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:48.997 01:26:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:49.257 01:26:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == 
\f\a\l\s\e ]] 00:25:49.257 01:26:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:49.258 01:26:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:49.258 01:26:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:49.258 01:26:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:49.258 01:26:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:49.258 01:26:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:49.258 01:26:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:49.517 01:26:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:49.517 01:26:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:49.518 01:26:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:49.518 01:26:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:49.778 01:26:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:49.778 01:26:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 
4420 accessible true 00:25:49.778 01:26:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:49.778 01:26:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:49.778 01:26:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:49.778 01:26:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:49.778 01:26:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:49.778 01:26:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:50.038 01:26:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:50.038 01:26:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:25:50.038 01:26:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:50.299 01:26:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:50.558 01:26:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:25:51.497 01:26:13 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@102 -- # check_status true false true true true true 00:25:51.497 01:26:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:51.497 01:26:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:51.497 01:26:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:51.757 01:26:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:51.757 01:26:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:51.757 01:26:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:51.757 01:26:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:51.757 01:26:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:51.757 01:26:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:51.757 01:26:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:51.757 01:26:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:52.016 01:26:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:52.017 01:26:14 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:52.017 01:26:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:52.017 01:26:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:52.277 01:26:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:52.277 01:26:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:52.277 01:26:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:52.277 01:26:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:52.277 01:26:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:52.277 01:26:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:52.277 01:26:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:52.277 01:26:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:52.537 01:26:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:52.537 01:26:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 
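From here the test cycles `set_ANA_state` through optimized/non_optimized/inaccessible combinations and, after each transition, `port_status` queries `bdev_nvme_get_io_paths` on the bdevperf RPC socket and extracts one attribute of one path with jq. The filter can be tried against a hand-made payload (the values below are illustrative, shaped like the RPC's output):

```shell
# Hypothetical sample shaped like bdev_nvme_get_io_paths output.
paths='{
  "poll_groups": [
    { "io_paths": [
        { "transport": { "trsvcid": "4420" }, "current": true,  "connected": true, "accessible": true },
        { "transport": { "trsvcid": "4421" }, "current": false, "connected": true, "accessible": false }
    ] }
  ]
}'
# The same jq filter the port_status helper uses: pick the path whose
# listener port matches, then read one boolean attribute from it.
echo "$paths" | jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
echo "$paths" | jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
```

`port_status` then string-compares the printed `true`/`false` against the expected value, which is why each `check_status` line in the log is followed by one RPC/jq pair per attribute.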
00:25:52.537 01:26:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:52.798 01:26:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:53.058 01:26:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:25:53.996 01:26:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:25:53.996 01:26:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:53.996 01:26:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:53.996 01:26:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:54.257 01:26:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:54.257 01:26:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:54.257 01:26:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:54.257 01:26:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:54.257 01:26:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- 
# [[ false == \f\a\l\s\e ]] 00:25:54.257 01:26:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:54.257 01:26:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:54.257 01:26:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:54.517 01:26:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:54.517 01:26:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:54.517 01:26:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:54.517 01:26:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:54.777 01:26:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:54.777 01:26:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:54.777 01:26:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:54.777 01:26:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:54.777 01:26:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:54.777 01:26:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 
-- # port_status 4421 accessible false 00:25:54.777 01:26:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:54.777 01:26:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:55.037 01:26:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:55.037 01:26:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:25:55.037 01:26:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:55.297 01:26:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:55.557 01:26:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:25:56.497 01:26:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:25:56.497 01:26:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:56.497 01:26:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:56.497 01:26:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:56.497 01:26:18 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:56.497 01:26:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:56.497 01:26:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:56.497 01:26:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:56.757 01:26:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:56.757 01:26:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:56.757 01:26:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:56.757 01:26:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:57.016 01:26:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:57.016 01:26:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:57.016 01:26:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:57.016 01:26:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:57.275 01:26:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:57.275 
01:26:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:57.275 01:26:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:57.276 01:26:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:57.276 01:26:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:57.276 01:26:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:57.276 01:26:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:57.276 01:26:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:57.537 01:26:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:57.537 01:26:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:25:57.537 01:26:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:57.537 01:26:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:57.797 01:26:20 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@113 -- # sleep 1 00:25:58.738 01:26:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:25:58.738 01:26:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:58.738 01:26:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:58.738 01:26:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:58.998 01:26:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:58.998 01:26:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:58.998 01:26:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:58.998 01:26:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:59.258 01:26:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:59.258 01:26:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:59.258 01:26:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:59.258 01:26:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:59.518 01:26:21 nvmf_tcp.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:59.518 01:26:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:59.518 01:26:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:59.518 01:26:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:59.518 01:26:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:59.518 01:26:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:59.518 01:26:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:59.518 01:26:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:59.778 01:26:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:59.778 01:26:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:59.778 01:26:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:59.778 01:26:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:00.038 01:26:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:00.038 01:26:22 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:26:00.038 01:26:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:26:00.038 01:26:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:00.297 01:26:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:00.555 01:26:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:26:01.496 01:26:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:26:01.496 01:26:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:01.496 01:26:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:01.496 01:26:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:01.755 01:26:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:01.755 01:26:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:01.755 01:26:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4421").current' 00:26:01.755 01:26:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:02.016 01:26:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:02.016 01:26:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:02.016 01:26:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:02.016 01:26:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:02.016 01:26:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:02.016 01:26:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:02.016 01:26:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:02.016 01:26:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:02.276 01:26:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:02.276 01:26:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:02.276 01:26:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:02.276 01:26:24 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:02.535 01:26:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:02.535 01:26:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:02.535 01:26:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:02.536 01:26:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:02.536 01:26:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:02.536 01:26:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:26:02.536 01:26:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:02.796 01:26:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:03.056 01:26:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:26:03.997 01:26:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:26:03.997 01:26:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:03.997 01:26:26 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.997 01:26:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:04.258 01:26:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:04.258 01:26:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:04.258 01:26:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:04.258 01:26:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:04.518 01:26:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:04.518 01:26:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:04.518 01:26:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:04.518 01:26:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:04.518 01:26:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:04.518 01:26:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:04.518 01:26:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:04.518 01:26:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:04.778 01:26:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:04.778 01:26:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:04.778 01:26:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:04.778 01:26:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:05.039 01:26:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:05.039 01:26:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:05.039 01:26:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.039 01:26:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:05.039 01:26:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:05.039 01:26:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:26:05.039 01:26:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
non_optimized 00:26:05.299 01:26:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:05.559 01:26:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:26:06.499 01:26:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:26:06.499 01:26:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:06.499 01:26:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:06.499 01:26:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:06.759 01:26:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:06.759 01:26:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:06.759 01:26:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:06.759 01:26:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:06.759 01:26:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:06.759 01:26:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:06.759 01:26:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:06.759 01:26:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:07.019 01:26:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:07.019 01:26:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:07.019 01:26:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.019 01:26:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:07.280 01:26:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:07.280 01:26:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:07.280 01:26:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.280 01:26:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:07.540 01:26:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:07.540 01:26:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:07.540 01:26:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
00:26:07.540 01:26:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:07.541 01:26:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:07.541 01:26:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:26:07.541 01:26:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:07.827 01:26:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:08.089 01:26:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:26:09.030 01:26:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:26:09.030 01:26:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:09.030 01:26:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:09.030 01:26:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:09.291 01:26:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:09.291 01:26:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:09.291 01:26:31 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:09.291 01:26:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:09.291 01:26:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:09.291 01:26:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:09.291 01:26:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:09.291 01:26:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:09.551 01:26:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:09.551 01:26:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:09.551 01:26:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:09.551 01:26:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:09.811 01:26:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:09.811 01:26:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:09.811 01:26:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:09.811 01:26:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:09.811 01:26:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:09.811 01:26:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:09.811 01:26:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:09.811 01:26:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:10.071 01:26:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:10.071 01:26:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1009093 00:26:10.071 01:26:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 1009093 ']' 00:26:10.072 01:26:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 1009093 00:26:10.072 01:26:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:26:10.072 01:26:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:10.072 01:26:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1009093 00:26:10.072 01:26:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:26:10.072 01:26:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:26:10.072 01:26:32 
nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1009093' 00:26:10.072 killing process with pid 1009093 00:26:10.072 01:26:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 1009093 00:26:10.072 01:26:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 1009093 00:26:10.346 Connection closed with partial response: 00:26:10.346 00:26:10.346 00:26:10.346 01:26:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1009093 00:26:10.346 01:26:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:10.346 [2024-07-25 01:26:03.683473] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:26:10.346 [2024-07-25 01:26:03.683525] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1009093 ] 00:26:10.346 EAL: No free 2048 kB hugepages reported on node 1 00:26:10.346 [2024-07-25 01:26:03.734812] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:10.346 [2024-07-25 01:26:03.808610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:10.346 Running I/O for 90 seconds... 
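The repeated `port_status` checks traced above all follow one pattern: call `bdev_nvme_get_io_paths` over the bdevperf RPC socket, then pipe the JSON through a `jq` filter such as `.poll_groups[].io_paths[] | select(.transport.trsvcid=="4420").current` and compare the result against the expected `true`/`false`. As a rough illustration only — the sample payload below is fabricated to match the field names the `jq` filter references, not taken from this run, and the real RPC response carries additional fields — the same selection can be expressed in Python:

```python
import json

# Fabricated example payload shaped like bdev_nvme_get_io_paths output.
# Field names follow the jq filter used in multipath_status.sh.
SAMPLE = json.loads("""
{
  "poll_groups": [
    {
      "io_paths": [
        {"transport": {"trsvcid": "4420"}, "current": false,
         "connected": true, "accessible": true},
        {"transport": {"trsvcid": "4421"}, "current": true,
         "connected": true, "accessible": true}
      ]
    }
  ]
}
""")

def port_status(payload, trsvcid, field):
    """Mirror of the script's jq filter:
    .poll_groups[].io_paths[] | select(.transport.trsvcid=="<port>").<field>
    (returns the first matching path; jq would print every match)."""
    for group in payload["poll_groups"]:
        for path in group["io_paths"]:
            if path["transport"]["trsvcid"] == trsvcid:
                return path[field]
    return None

# E.g. after set_ANA_state inaccessible/optimized, the script expects
# port 4420 to not be the current path while 4421 is:
print(port_status(SAMPLE, "4420", "current"))   # False
print(port_status(SAMPLE, "4421", "current"))   # True
```

The `check_status` wrapper in the log is just six of these lookups in a row — `current`, `connected`, and `accessible` for each of ports 4420 and 4421 — run after each `nvmf_subsystem_listener_set_ana_state` call and a one-second settle.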
00:26:10.346 [2024-07-25 01:26:17.596022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:54008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.346 [2024-07-25 01:26:17.596067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:10.346 [2024-07-25 01:26:17.596118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:54016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.346 [2024-07-25 01:26:17.596127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:10.346 [2024-07-25 01:26:17.596140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:54024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.346 [2024-07-25 01:26:17.596148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:10.346 [2024-07-25 01:26:17.596160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:54032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.346 [2024-07-25 01:26:17.596167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:10.346 [2024-07-25 01:26:17.596180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:54040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.346 [2024-07-25 01:26:17.596187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:10.346 [2024-07-25 01:26:17.596199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:54048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.346 
[2024-07-25 01:26:17.596206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:10.346 [2024-07-25 01:26:17.596219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:54056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.346 [2024-07-25 01:26:17.596226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:10.346 [2024-07-25 01:26:17.596238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:54064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.346 [2024-07-25 01:26:17.596245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:10.346 [2024-07-25 01:26:17.596257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:53752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.346 [2024-07-25 01:26:17.596264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:10.346 [2024-07-25 01:26:17.596277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:53760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.346 [2024-07-25 01:26:17.596284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:10.346 [2024-07-25 01:26:17.596297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:53768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.346 [2024-07-25 01:26:17.596310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:10.346 
[2024-07-25 01:26:17.596322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:53776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.346 [2024-07-25 01:26:17.596330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:10.346 [2024-07-25 01:26:17.596342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:53784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.346 [2024-07-25 01:26:17.596348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:10.346 [2024-07-25 01:26:17.596361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:53792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.346 [2024-07-25 01:26:17.596368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:10.346 [2024-07-25 01:26:17.596380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:53800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.346 [2024-07-25 01:26:17.596387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:10.346 [2024-07-25 01:26:17.596400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:53808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.346 [2024-07-25 01:26:17.596407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:10.346 [2024-07-25 01:26:17.596538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:54072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.346 [2024-07-25 
01:26:17.596547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.346 [2024-07-25 01:26:17.596562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:54080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.346 [2024-07-25 01:26:17.596570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:10.346 [2024-07-25 01:26:17.596583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:54088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.346 [2024-07-25 01:26:17.596590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:10.346 [2024-07-25 01:26:17.596604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:54096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.346 [2024-07-25 01:26:17.596611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:10.346 [2024-07-25 01:26:17.596624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:54104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.346 [2024-07-25 01:26:17.596631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:10.346 [2024-07-25 01:26:17.596644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:54112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.346 [2024-07-25 01:26:17.596651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:10.346 [2024-07-25 01:26:17.596664] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:54120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.346 [2024-07-25 01:26:17.596671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:10.346 [2024-07-25 01:26:17.596686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:54128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.346 [2024-07-25 01:26:17.596693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:10.346 [2024-07-25 01:26:17.596707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:54136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.346 [2024-07-25 01:26:17.596713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:10.346 [2024-07-25 01:26:17.596727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:54144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.346 [2024-07-25 01:26:17.596735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:10.346 [2024-07-25 01:26:17.596748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:54152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.346 [2024-07-25 01:26:17.596755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:10.346 [2024-07-25 01:26:17.596768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:54160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.346 [2024-07-25 01:26:17.596774] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:10.346 [2024-07-25 01:26:17.596788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:54168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.346 [2024-07-25 01:26:17.596794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:10.346 [2024-07-25 01:26:17.596808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:54176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.347 [2024-07-25 01:26:17.596815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:10.347 [2024-07-25 01:26:17.596828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:54184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.347 [2024-07-25 01:26:17.596834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:10.347 [2024-07-25 01:26:17.596848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:54192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.347 [2024-07-25 01:26:17.596854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:10.347 [2024-07-25 01:26:17.596868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:54200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.347 [2024-07-25 01:26:17.596874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:10.347 [2024-07-25 01:26:17.596889] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:54208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.347 [2024-07-25 01:26:17.596895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:10.347 [2024-07-25 01:26:17.596909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:54216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.347 [2024-07-25 01:26:17.596916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:10.347 [2024-07-25 01:26:17.596930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:54224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.347 [2024-07-25 01:26:17.596937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:10.347 [2024-07-25 01:26:17.596952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:54232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.347 [2024-07-25 01:26:17.596958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:10.347 [2024-07-25 01:26:17.596972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:54240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.347 [2024-07-25 01:26:17.596978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:10.347 [2024-07-25 01:26:17.596992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:54248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.347 [2024-07-25 01:26:17.596999] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:10.347 [2024-07-25 01:26:17.597012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:54256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.347 [2024-07-25 01:26:17.597019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:10.347 [2024-07-25 01:26:17.597032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:54264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.347 [2024-07-25 01:26:17.597038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:10.347 [2024-07-25 01:26:17.597057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:54272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.347 [2024-07-25 01:26:17.597064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:10.347 [2024-07-25 01:26:17.597077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:54280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.347 [2024-07-25 01:26:17.597084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:10.347 [2024-07-25 01:26:17.597097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:54288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.347 [2024-07-25 01:26:17.597104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:10.347 [2024-07-25 01:26:17.597117] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:54296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.347 [2024-07-25 01:26:17.597124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:10.347 [2024-07-25 01:26:17.597137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:54304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.347 [2024-07-25 01:26:17.597144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:10.347 [2024-07-25 01:26:17.597157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:54312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.347 [2024-07-25 01:26:17.597163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:10.347 [2024-07-25 01:26:17.597176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:54320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.347 [2024-07-25 01:26:17.597184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:10.347 [2024-07-25 01:26:17.597198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:53816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.347 [2024-07-25 01:26:17.597205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.347 [2024-07-25 01:26:17.597219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:53824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.347 [2024-07-25 01:26:17.597225] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:10.347 [2024-07-25 01:26:17.597239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:53832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.347 [2024-07-25 01:26:17.597246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:10.347 [2024-07-25 01:26:17.597259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:53840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.347 [2024-07-25 01:26:17.597266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:10.347 [2024-07-25 01:26:17.597279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:53848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.347 [2024-07-25 01:26:17.597286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:10.347 [2024-07-25 01:26:17.597299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:53856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.347 [2024-07-25 01:26:17.597305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:10.347 [2024-07-25 01:26:17.597319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:53864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.347 [2024-07-25 01:26:17.597326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:10.347 [2024-07-25 01:26:17.597340] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:53872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.347 [2024-07-25 01:26:17.597347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:10.347 [2024-07-25 01:26:17.597433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:54328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.347 [2024-07-25 01:26:17.597442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:10.347 [2024-07-25 01:26:17.597459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:54336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.347 [2024-07-25 01:26:17.597466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:10.347 [2024-07-25 01:26:17.597482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:54344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.347 [2024-07-25 01:26:17.597489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:10.347 [2024-07-25 01:26:17.597504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:54352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.347 [2024-07-25 01:26:17.597513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:10.347 [2024-07-25 01:26:17.597528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:54360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.347 [2024-07-25 01:26:17.597535] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:10.347 [2024-07-25 01:26:17.597551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.347 [2024-07-25 01:26:17.597558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:10.347 [2024-07-25 01:26:17.597574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:54376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.347 [2024-07-25 01:26:17.597580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:10.347 [2024-07-25 01:26:17.597596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:54384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.347 [2024-07-25 01:26:17.597603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:10.347 [2024-07-25 01:26:17.597619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:53880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.347 [2024-07-25 01:26:17.597625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:10.347 [2024-07-25 01:26:17.597641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:53888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.347 [2024-07-25 01:26:17.597648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:10.347 [2024-07-25 01:26:17.597664] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:53896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.347 [2024-07-25 01:26:17.597671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:10.347 [2024-07-25 01:26:17.597686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:53904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.347 [2024-07-25 01:26:17.597693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:10.348 [2024-07-25 01:26:17.597709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:53912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.348 [2024-07-25 01:26:17.597716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:10.348 [2024-07-25 01:26:17.597731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:53920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.348 [2024-07-25 01:26:17.597738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:10.348 [2024-07-25 01:26:17.597753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:53928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.348 [2024-07-25 01:26:17.597760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:10.348 [2024-07-25 01:26:17.597776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:53936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.348 [2024-07-25 01:26:17.597787] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:10.348 [2024-07-25 01:26:17.597851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:54392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.348 [2024-07-25 01:26:17.597860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:10.348 [2024-07-25 01:26:17.597877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:54400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.348 [2024-07-25 01:26:17.597884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:10.348 [2024-07-25 01:26:17.597901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:54408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.348 [2024-07-25 01:26:17.597908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:10.348 [2024-07-25 01:26:17.597925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:54416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.348 [2024-07-25 01:26:17.597932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:10.348 [2024-07-25 01:26:17.597948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:54424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.348 [2024-07-25 01:26:17.597955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:10.348 [2024-07-25 01:26:17.597972] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:54432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.348 [2024-07-25 01:26:17.597978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:10.348 [2024-07-25 01:26:17.597995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:54440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.348 [2024-07-25 01:26:17.598002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:10.348 [2024-07-25 01:26:17.598019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:54448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.348 [2024-07-25 01:26:17.598026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:10.348 [2024-07-25 01:26:17.598048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:54456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.348 [2024-07-25 01:26:17.598055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.348 [2024-07-25 01:26:17.598072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:54464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.348 [2024-07-25 01:26:17.598079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:10.348 [2024-07-25 01:26:17.598096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:54472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.348 [2024-07-25 01:26:17.598103] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:10.348 [2024-07-25 01:26:17.598120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:54480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.348 [2024-07-25 01:26:17.598126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:10.348 [2024-07-25 01:26:17.598145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:54488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.348 [2024-07-25 01:26:17.598152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:10.348 [2024-07-25 01:26:17.598168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:54496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.348 [2024-07-25 01:26:17.598175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:10.348 [2024-07-25 01:26:17.598192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:54504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.348 [2024-07-25 01:26:17.598199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:10.348 [2024-07-25 01:26:17.598216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:54512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.348 [2024-07-25 01:26:17.598222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:10.348 [2024-07-25 01:26:17.598238] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:54520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.348 [2024-07-25 01:26:17.598247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:10.348 [2024-07-25 01:26:17.598264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:54528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.348 [2024-07-25 01:26:17.598270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:10.348 [2024-07-25 01:26:17.598287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:54536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.348 [2024-07-25 01:26:17.598294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:10.348 [2024-07-25 01:26:17.598311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:54544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.348 [2024-07-25 01:26:17.598317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:10.348 [2024-07-25 01:26:17.598334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:54552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.348 [2024-07-25 01:26:17.598340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:10.348 [2024-07-25 01:26:17.598357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:54560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.348 [2024-07-25 01:26:17.598364] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:10.348 [2024-07-25 01:26:17.598381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:54568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.348 [2024-07-25 01:26:17.598387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:10.348 [2024-07-25 01:26:17.598404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:54576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.348 [2024-07-25 01:26:17.598410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:10.348 [2024-07-25 01:26:17.598429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:54584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.348 [2024-07-25 01:26:17.598435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:10.348 [2024-07-25 01:26:17.598453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:54592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.348 [2024-07-25 01:26:17.598460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:10.348 [2024-07-25 01:26:17.598476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:54600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.348 [2024-07-25 01:26:17.598483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:10.348 [2024-07-25 01:26:17.598500] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:54608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.348 [2024-07-25 01:26:17.598506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:10.348 [2024-07-25 01:26:17.598523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:54616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.348 [2024-07-25 01:26:17.598529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:10.348 [2024-07-25 01:26:17.598546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:54624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.348 [2024-07-25 01:26:17.598552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:10.348 [2024-07-25 01:26:17.598569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:54632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.348 [2024-07-25 01:26:17.598576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:10.348 [2024-07-25 01:26:17.598593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:54640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.348 [2024-07-25 01:26:17.598599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:10.348 [2024-07-25 01:26:17.598616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:54648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.348 [2024-07-25 01:26:17.598623] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:10.348 [2024-07-25 01:26:17.598640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:54656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.348 [2024-07-25 01:26:17.598647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:10.349 [2024-07-25 01:26:17.598728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:54664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.349 [2024-07-25 01:26:17.598737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:10.349 [2024-07-25 01:26:17.598758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:54672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.349 [2024-07-25 01:26:17.598765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:10.349 [2024-07-25 01:26:17.598784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:54680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.349 [2024-07-25 01:26:17.598793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:10.349 [2024-07-25 01:26:17.598812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:53944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.349 [2024-07-25 01:26:17.598820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:10.349 [2024-07-25 01:26:17.598839] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:53952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.349 [2024-07-25 01:26:17.598846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:10.349 [2024-07-25 01:26:17.598865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:53960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.349 [2024-07-25 01:26:17.598872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:10.349 [2024-07-25 01:26:17.598891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:53968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.349 [2024-07-25 01:26:17.598898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.349 [2024-07-25 01:26:17.598924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:53976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.349 [2024-07-25 01:26:17.598932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:10.349 [2024-07-25 01:26:17.598951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:53984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.349 [2024-07-25 01:26:17.598957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:10.349 [2024-07-25 01:26:17.598976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:53992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.349 [2024-07-25 01:26:17.598983] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:26:10.349 [2024-07-25 01:26:17.599002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:54000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.349 [2024-07-25 01:26:17.599009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:26:10.349 [2024-07-25 01:26:17.599028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:54688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.349 [2024-07-25 01:26:17.599035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:26:10.349 [2024-07-25 01:26:17.599059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:54696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.349 [2024-07-25 01:26:17.599066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:26:10.349 [2024-07-25 01:26:17.599085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:54704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.349 [2024-07-25 01:26:17.599092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:26:10.349 [2024-07-25 01:26:17.599111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:54712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.349 [2024-07-25 01:26:17.599119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:26:10.349 [2024-07-25 01:26:17.599139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:54720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.349 [2024-07-25 01:26:17.599145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:26:10.349 [2024-07-25 01:26:17.599165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:54728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.349 [2024-07-25 01:26:17.599172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:26:10.349 [2024-07-25 01:26:17.599191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:54736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.349 [2024-07-25 01:26:17.599197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:26:10.349 [2024-07-25 01:26:17.599216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:54744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.349 [2024-07-25 01:26:17.599223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:26:10.349 [2024-07-25 01:26:17.599243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:54752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.349 [2024-07-25 01:26:17.599249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:26:10.349 [2024-07-25 01:26:17.599268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:54760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.349 [2024-07-25 01:26:17.599275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:26:10.349 [2024-07-25 01:26:17.599295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:54768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.349 [2024-07-25 01:26:17.599301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:26:10.349 [2024-07-25 01:26:30.339812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:80688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.349 [2024-07-25 01:26:30.339852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:26:10.349 [2024-07-25 01:26:30.339873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:80704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.349 [2024-07-25 01:26:30.339882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:26:10.349 [2024-07-25 01:26:30.339897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:80720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.349 [2024-07-25 01:26:30.339904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:26:10.349 [2024-07-25 01:26:30.339918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.349 [2024-07-25 01:26:30.339925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:26:10.349 [2024-07-25 01:26:30.339937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:80752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.349 [2024-07-25 01:26:30.339945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:26:10.349 [2024-07-25 01:26:30.339962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.349 [2024-07-25 01:26:30.339969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:26:10.349 [2024-07-25 01:26:30.339982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:80784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.349 [2024-07-25 01:26:30.339992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:26:10.349 [2024-07-25 01:26:30.340006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:80800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.349 [2024-07-25 01:26:30.340014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:26:10.349 [2024-07-25 01:26:30.340026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:80816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.349 [2024-07-25 01:26:30.340033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:26:10.349 [2024-07-25 01:26:30.340051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:80832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.349 [2024-07-25 01:26:30.340059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:26:10.349 [2024-07-25 01:26:30.340071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:80848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.349 [2024-07-25 01:26:30.340078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:26:10.349 [2024-07-25 01:26:30.340091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:80864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.349 [2024-07-25 01:26:30.340099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:26:10.349 [2024-07-25 01:26:30.340112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:80880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.349 [2024-07-25 01:26:30.340119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:26:10.349 [2024-07-25 01:26:30.340132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:80896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.349 [2024-07-25 01:26:30.340139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:10.349 [2024-07-25 01:26:30.340151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:80912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.349 [2024-07-25 01:26:30.340158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:10.349 [2024-07-25 01:26:30.340171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:80928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.349 [2024-07-25 01:26:30.340178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:26:10.349 [2024-07-25 01:26:30.340190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:80944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.350 [2024-07-25 01:26:30.340197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:26:10.350 [2024-07-25 01:26:30.340213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:80960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.350 [2024-07-25 01:26:30.340221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:26:10.350 [2024-07-25 01:26:30.340236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:80976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.350 [2024-07-25 01:26:30.340243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:26:10.350 [2024-07-25 01:26:30.340255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:80992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.350 [2024-07-25 01:26:30.340264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:26:10.350 [2024-07-25 01:26:30.340277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.350 [2024-07-25 01:26:30.340284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:26:10.350 [2024-07-25 01:26:30.340297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:81024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.350 [2024-07-25 01:26:30.340306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:26:10.350 [2024-07-25 01:26:30.340319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:81032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.350 [2024-07-25 01:26:30.340327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:26:10.350 [2024-07-25 01:26:30.340340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.350 [2024-07-25 01:26:30.340347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:26:10.350 [2024-07-25 01:26:30.340359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:81064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.350 [2024-07-25 01:26:30.340366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:26:10.350 [2024-07-25 01:26:30.340379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:81080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.350 [2024-07-25 01:26:30.340385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:26:10.350 [2024-07-25 01:26:30.340398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:81096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.350 [2024-07-25 01:26:30.340404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:26:10.350 [2024-07-25 01:26:30.340417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:81112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.350 [2024-07-25 01:26:30.340423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:26:10.350 [2024-07-25 01:26:30.340435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:81128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.350 [2024-07-25 01:26:30.340442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:26:10.350 [2024-07-25 01:26:30.340454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:81144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.350 [2024-07-25 01:26:30.340462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:26:10.350 [2024-07-25 01:26:30.340475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:81160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.350 [2024-07-25 01:26:30.340481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:26:10.350 [2024-07-25 01:26:30.340889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:81176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.350 [2024-07-25 01:26:30.340901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:26:10.350 [2024-07-25 01:26:30.340916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:81192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.350 [2024-07-25 01:26:30.340924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:26:10.350 [2024-07-25 01:26:30.340937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.350 [2024-07-25 01:26:30.340944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:26:10.350 [2024-07-25 01:26:30.340957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:81224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.350 [2024-07-25 01:26:30.340963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:26:10.350 [2024-07-25 01:26:30.340976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:81240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.350 [2024-07-25 01:26:30.340983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:26:10.350 [2024-07-25 01:26:30.340995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:81256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.350 [2024-07-25 01:26:30.341002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:26:10.350 [2024-07-25 01:26:30.341014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:81272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.350 [2024-07-25 01:26:30.341021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:26:10.350 [2024-07-25 01:26:30.341035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:81288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.350 [2024-07-25 01:26:30.341048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:26:10.350 [2024-07-25 01:26:30.341061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:81304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.350 [2024-07-25 01:26:30.341068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:26:10.350 [2024-07-25 01:26:30.341080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:81320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.350 [2024-07-25 01:26:30.341087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:26:10.350 [2024-07-25 01:26:30.341099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:81336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.350 [2024-07-25 01:26:30.341108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:26:10.350 [2024-07-25 01:26:30.341121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:81352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.350 [2024-07-25 01:26:30.341127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:26:10.350 [2024-07-25 01:26:30.341140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:80520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.350 [2024-07-25 01:26:30.341146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:26:10.350 [2024-07-25 01:26:30.341159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:81360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.350 [2024-07-25 01:26:30.341165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:26:10.350 [2024-07-25 01:26:30.341177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:81376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.350 [2024-07-25 01:26:30.341184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:10.350 [2024-07-25 01:26:30.341196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:81392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.350 [2024-07-25 01:26:30.341203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:10.351 [2024-07-25 01:26:30.341676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:81408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.351 [2024-07-25 01:26:30.341690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:26:10.351 [2024-07-25 01:26:30.341705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:80568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.351 [2024-07-25 01:26:30.341712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:26:10.351 [2024-07-25 01:26:30.341727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:80600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.351 [2024-07-25 01:26:30.341733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:26:10.351 [2024-07-25 01:26:30.341746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:80632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.351 [2024-07-25 01:26:30.341754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:26:10.351 [2024-07-25 01:26:30.341767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:80664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.351 [2024-07-25 01:26:30.341774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:26:10.351 [2024-07-25 01:26:30.341786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:81416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.351 [2024-07-25 01:26:30.341794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:26:10.351 [2024-07-25 01:26:30.341806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:81432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.351 [2024-07-25 01:26:30.341814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:26:10.351 [2024-07-25 01:26:30.341831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:81448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.351 [2024-07-25 01:26:30.341839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:26:10.351 [2024-07-25 01:26:30.341851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:81464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.351 [2024-07-25 01:26:30.341858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:26:10.351 [2024-07-25 01:26:30.341872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:80528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.351 [2024-07-25 01:26:30.341879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:26:10.351 [2024-07-25 01:26:30.341891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:81480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.351 [2024-07-25 01:26:30.341900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:26:10.351 [2024-07-25 01:26:30.341913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:81496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.351 [2024-07-25 01:26:30.341919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:26:10.351 [2024-07-25 01:26:30.341931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:81512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.351 [2024-07-25 01:26:30.341938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:26:10.351 [2024-07-25 01:26:30.341950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:80560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.351 [2024-07-25 01:26:30.341958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:26:10.351 [2024-07-25 01:26:30.341970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:80592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.351 [2024-07-25 01:26:30.341977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:26:10.351 [2024-07-25 01:26:30.341992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:80624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.351 [2024-07-25 01:26:30.341999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:26:10.351 [2024-07-25 01:26:30.342011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:80656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.351 [2024-07-25 01:26:30.342019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:26:10.351 [2024-07-25 01:26:30.342182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:80696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.351 [2024-07-25 01:26:30.342194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:10.351 [2024-07-25 01:26:30.342209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:80728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.351 [2024-07-25 01:26:30.342216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:26:10.351 [2024-07-25 01:26:30.342231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:80760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.351 [2024-07-25 01:26:30.342239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:26:10.351 [2024-07-25 01:26:30.342251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:80792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.351 [2024-07-25 01:26:30.342258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:26:10.351 [2024-07-25 01:26:30.342271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.351 [2024-07-25 01:26:30.342277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:26:10.351 [2024-07-25 01:26:30.342290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:80856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.351 [2024-07-25 01:26:30.342297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:26:10.351 [2024-07-25 01:26:30.342310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:80888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.351 [2024-07-25 01:26:30.342317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:26:10.351 [2024-07-25 01:26:30.342329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:80920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.351 [2024-07-25 01:26:30.342336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:26:10.351 [2024-07-25 01:26:30.342348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:80952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.351 [2024-07-25 01:26:30.342355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:26:10.351 [2024-07-25 01:26:30.342368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:80984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.351 [2024-07-25 01:26:30.342375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:26:10.351 [2024-07-25 01:26:30.342388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:81016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.351 [2024-07-25 01:26:30.342396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:26:10.351 [2024-07-25 01:26:30.342409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:81056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.351 [2024-07-25 01:26:30.342416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:26:10.351 [2024-07-25 01:26:30.342428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:80704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.351 [2024-07-25 01:26:30.342435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:10.351 [2024-07-25 01:26:30.342447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:80736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.351 [2024-07-25 01:26:30.342453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:10.351 [2024-07-25 01:26:30.342466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:80768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.351 [2024-07-25 01:26:30.342474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:10.351 [2024-07-25 01:26:30.342487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:80800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.351 [2024-07-25 01:26:30.342493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:26:10.351 [2024-07-25 01:26:30.342505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:80832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.351 [2024-07-25 01:26:30.342512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:26:10.351 [2024-07-25 01:26:30.342524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.351 [2024-07-25 01:26:30.342531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:26:10.351 [2024-07-25 01:26:30.342544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:80896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.351 [2024-07-25 01:26:30.342550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:26:10.351 [2024-07-25 01:26:30.342717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:80928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.351 [2024-07-25 01:26:30.342726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:26:10.351 [2024-07-25 01:26:30.342739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:80960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.351 [2024-07-25 01:26:30.342746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:26:10.351 [2024-07-25 01:26:30.342758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:80992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.352 [2024-07-25 01:26:30.342765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:26:10.352 [2024-07-25 01:26:30.342777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.352 [2024-07-25 01:26:30.342784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:26:10.352 [2024-07-25 01:26:30.342796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:81048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.352 [2024-07-25 01:26:30.342802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:26:10.352 [2024-07-25 01:26:30.342815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:81080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.352 [2024-07-25 01:26:30.342821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:26:10.352 [2024-07-25 01:26:30.342833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:81112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.352 [2024-07-25 01:26:30.342840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:26:10.352 [2024-07-25 01:26:30.342852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:81144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.352 [2024-07-25 01:26:30.342862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:26:10.352 [2024-07-25 01:26:30.342875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:81072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.352 [2024-07-25 01:26:30.342882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:26:10.352 [2024-07-25 01:26:30.342894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:81104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.352 [2024-07-25 01:26:30.342901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:26:10.352 [2024-07-25 01:26:30.342913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:81136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.352 [2024-07-25 01:26:30.342920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:26:10.352 [2024-07-25 01:26:30.343319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:81192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.352 [2024-07-25 01:26:30.343334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:26:10.352 [2024-07-25 01:26:30.343348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:81224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.352 [2024-07-25 01:26:30.343355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:26:10.352 [2024-07-25 01:26:30.343370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:81256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.352 [2024-07-25 01:26:30.343377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:26:10.352 [2024-07-25 01:26:30.343390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.352 [2024-07-25 01:26:30.343397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:26:10.352 [2024-07-25 01:26:30.343411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:81320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.352 [2024-07-25 01:26:30.343419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:26:10.352 [2024-07-25 01:26:30.343432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:81352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.352 [2024-07-25 01:26:30.343439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:26:10.352 [2024-07-25 01:26:30.343451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:81360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.352 [2024-07-25 01:26:30.343458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:26:10.352 [2024-07-25 01:26:30.343470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:81392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.352 [2024-07-25 01:26:30.343477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:26:10.352 [2024-07-25 01:26:30.343490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:81184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.352 [2024-07-25 01:26:30.343496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:26:10.352 [2024-07-25 01:26:30.343512] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:81216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.352 [2024-07-25 01:26:30.343519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:10.352 [2024-07-25 01:26:30.343532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:81248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.352 [2024-07-25 01:26:30.343538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:10.352 [2024-07-25 01:26:30.343551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:81280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.352 [2024-07-25 01:26:30.343557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:10.352 [2024-07-25 01:26:30.343570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:81312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.352 [2024-07-25 01:26:30.343577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:10.352 [2024-07-25 01:26:30.343589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:81344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.352 [2024-07-25 01:26:30.343596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:10.352 [2024-07-25 01:26:30.343609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:81384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.352 [2024-07-25 01:26:30.343615] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:10.352 [2024-07-25 01:26:30.343628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:80568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.352 [2024-07-25 01:26:30.343635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.352 [2024-07-25 01:26:30.343647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:80632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.352 [2024-07-25 01:26:30.343654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:10.352 [2024-07-25 01:26:30.344314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:81416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.352 [2024-07-25 01:26:30.344329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:10.352 [2024-07-25 01:26:30.344344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:81448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.352 [2024-07-25 01:26:30.344352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:10.352 [2024-07-25 01:26:30.344364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:80528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.352 [2024-07-25 01:26:30.344371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:10.352 [2024-07-25 01:26:30.344385] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:81496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.352 [2024-07-25 01:26:30.344392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:10.352 [2024-07-25 01:26:30.344407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:80560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.352 [2024-07-25 01:26:30.344414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:10.352 [2024-07-25 01:26:30.344426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:80624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.352 [2024-07-25 01:26:30.344434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:10.352 [2024-07-25 01:26:30.344446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:80728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.352 [2024-07-25 01:26:30.344453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:10.352 [2024-07-25 01:26:30.344466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:80792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.352 [2024-07-25 01:26:30.344472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:10.352 [2024-07-25 01:26:30.344485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:80856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.352 [2024-07-25 01:26:30.344491] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:10.352 [2024-07-25 01:26:30.344504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:80920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.352 [2024-07-25 01:26:30.344511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:10.352 [2024-07-25 01:26:30.344523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:80984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.352 [2024-07-25 01:26:30.344529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:10.352 [2024-07-25 01:26:30.344542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:81056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.352 [2024-07-25 01:26:30.344548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:10.352 [2024-07-25 01:26:30.344562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:80736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.352 [2024-07-25 01:26:30.344569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:10.353 [2024-07-25 01:26:30.344581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:80800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.353 [2024-07-25 01:26:30.344588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:10.353 [2024-07-25 01:26:30.344600] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:80864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.353 [2024-07-25 01:26:30.344607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:10.353 [2024-07-25 01:26:30.344620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:81424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.353 [2024-07-25 01:26:30.344626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:10.353 [2024-07-25 01:26:30.344639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:81456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.353 [2024-07-25 01:26:30.344649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:10.353 [2024-07-25 01:26:30.344662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:81488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.353 [2024-07-25 01:26:30.344669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:10.353 [2024-07-25 01:26:30.344682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:81520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.353 [2024-07-25 01:26:30.344688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:10.353 [2024-07-25 01:26:30.344701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:80960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.353 [2024-07-25 01:26:30.344707] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:10.353 [2024-07-25 01:26:30.344720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:81024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.353 [2024-07-25 01:26:30.344726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:10.353 [2024-07-25 01:26:30.344739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:81080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.353 [2024-07-25 01:26:30.344745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:10.353 [2024-07-25 01:26:30.344757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.353 [2024-07-25 01:26:30.344764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:10.353 [2024-07-25 01:26:30.344777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:81104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.353 [2024-07-25 01:26:30.344784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:10.353 [2024-07-25 01:26:30.345026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:81224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.353 [2024-07-25 01:26:30.345035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:10.353 [2024-07-25 01:26:30.345055] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:81288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.353 [2024-07-25 01:26:30.345062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:10.353 [2024-07-25 01:26:30.345075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:81352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.353 [2024-07-25 01:26:30.345082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:10.353 [2024-07-25 01:26:30.345094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.353 [2024-07-25 01:26:30.345101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:10.353 [2024-07-25 01:26:30.345113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:81216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.353 [2024-07-25 01:26:30.345122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:10.353 [2024-07-25 01:26:30.345135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:81280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.353 [2024-07-25 01:26:30.345141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:10.353 [2024-07-25 01:26:30.345154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:81344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.353 [2024-07-25 01:26:30.345161] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.353 [2024-07-25 01:26:30.345174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:80568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.353 [2024-07-25 01:26:30.345181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:10.353 [2024-07-25 01:26:30.346342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:80688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.353 [2024-07-25 01:26:30.346361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:10.353 [2024-07-25 01:26:30.346376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:81544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.353 [2024-07-25 01:26:30.346383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:10.353 [2024-07-25 01:26:30.346398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:81560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.353 [2024-07-25 01:26:30.346405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:10.353 [2024-07-25 01:26:30.346418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:81576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.353 [2024-07-25 01:26:30.346424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:10.353 [2024-07-25 01:26:30.346437] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:80720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.353 [2024-07-25 01:26:30.346443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:10.353 [2024-07-25 01:26:30.346456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:80784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.353 [2024-07-25 01:26:30.346463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:10.353 [2024-07-25 01:26:30.346475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:80848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.353 [2024-07-25 01:26:30.346482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:10.353 [2024-07-25 01:26:30.346495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:80912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.353 [2024-07-25 01:26:30.346503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:10.353 [2024-07-25 01:26:30.346515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:80976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.353 [2024-07-25 01:26:30.346522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:10.353 [2024-07-25 01:26:30.346537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:81032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.353 [2024-07-25 01:26:30.346544] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:10.353 [2024-07-25 01:26:30.346556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:81096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.353 [2024-07-25 01:26:30.346563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:10.353 [2024-07-25 01:26:30.346575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:81160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.353 [2024-07-25 01:26:30.346581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:10.353 [2024-07-25 01:26:30.346594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:81448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.353 [2024-07-25 01:26:30.346600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:10.353 [2024-07-25 01:26:30.346613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.353 [2024-07-25 01:26:30.346620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:10.353 [2024-07-25 01:26:30.346632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:80624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.353 [2024-07-25 01:26:30.346638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:10.353 [2024-07-25 01:26:30.346650] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:80792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.353 [2024-07-25 01:26:30.346657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:10.353 [2024-07-25 01:26:30.346669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:80920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.353 [2024-07-25 01:26:30.346676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:10.353 [2024-07-25 01:26:30.346688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:81056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.353 [2024-07-25 01:26:30.346695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:10.353 [2024-07-25 01:26:30.346709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:80800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.353 [2024-07-25 01:26:30.346716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:10.353 [2024-07-25 01:26:30.346728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:81424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.354 [2024-07-25 01:26:30.346735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:10.354 [2024-07-25 01:26:30.346747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:81488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.354 [2024-07-25 01:26:30.346754] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:10.354 [2024-07-25 01:26:30.346768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:80960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.354 [2024-07-25 01:26:30.346775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:10.354 [2024-07-25 01:26:30.346787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:81080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.354 [2024-07-25 01:26:30.346794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:10.354 [2024-07-25 01:26:30.346807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:81104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.354 [2024-07-25 01:26:30.346813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:10.354 [2024-07-25 01:26:30.347582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:81600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.354 [2024-07-25 01:26:30.347598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:10.354 [2024-07-25 01:26:30.347613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:81616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.354 [2024-07-25 01:26:30.347622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:10.354 [2024-07-25 01:26:30.347637] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:81632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.354 [2024-07-25 01:26:30.347644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:10.354 [2024-07-25 01:26:30.347656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:81648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.354 [2024-07-25 01:26:30.347663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:10.354 [2024-07-25 01:26:30.347677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.354 [2024-07-25 01:26:30.347684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:10.354 [2024-07-25 01:26:30.347698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:81680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.354 [2024-07-25 01:26:30.347705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:10.354 [2024-07-25 01:26:30.347720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:81696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.354 [2024-07-25 01:26:30.347728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.354 [2024-07-25 01:26:30.347742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:81712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.354 [2024-07-25 01:26:30.347748] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:10.354 [2024-07-25 01:26:30.347761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:81240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.354 [2024-07-25 01:26:30.347768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 [... repeated READ/WRITE command and ASYMMETRIC ACCESS INACCESSIBLE (03/02) completion *NOTICE* entries on qid:1 from 01:26:30.347 through 01:26:30.364 elided; all with cdw0:0 p:0 m:0 dnr:0 ...] 00:26:10.358 [2024-07-25 01:26:30.364572] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:81608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.358 [2024-07-25 01:26:30.364579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:10.358 [2024-07-25 01:26:30.364591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:81672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.358 [2024-07-25 01:26:30.364598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:10.358 [2024-07-25 01:26:30.364610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:81352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.358 [2024-07-25 01:26:30.364617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:10.358 [2024-07-25 01:26:30.364629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:81856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.358 [2024-07-25 01:26:30.364636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:10.358 [2024-07-25 01:26:30.364648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:81536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.358 [2024-07-25 01:26:30.364655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:10.358 [2024-07-25 01:26:30.364668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:80784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.358 [2024-07-25 01:26:30.364675] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:10.358 [2024-07-25 01:26:30.364687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:80960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.358 [2024-07-25 01:26:30.364693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:10.358 [2024-07-25 01:26:30.364705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.358 [2024-07-25 01:26:30.364712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:10.358 [2024-07-25 01:26:30.364724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:81848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.358 [2024-07-25 01:26:30.364732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:10.358 [2024-07-25 01:26:30.364745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:82240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.358 [2024-07-25 01:26:30.364751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:10.358 [2024-07-25 01:26:30.364763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:82256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.358 [2024-07-25 01:26:30.364770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:10.358 [2024-07-25 01:26:30.364782] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:81880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.358 [2024-07-25 01:26:30.364788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.358 [2024-07-25 01:26:30.364801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:81912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.358 [2024-07-25 01:26:30.364807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:10.358 [2024-07-25 01:26:30.364819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:81752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.358 [2024-07-25 01:26:30.364826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:10.358 [2024-07-25 01:26:30.364838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:81816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.358 [2024-07-25 01:26:30.364845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:10.358 [2024-07-25 01:26:30.364857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:82272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.358 [2024-07-25 01:26:30.364864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:10.358 [2024-07-25 01:26:30.364876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:82288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.358 [2024-07-25 01:26:30.364882] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:10.358 [2024-07-25 01:26:30.364895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:81712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.358 [2024-07-25 01:26:30.364901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:10.358 [2024-07-25 01:26:30.364913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:81920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.358 [2024-07-25 01:26:30.364920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:10.358 [2024-07-25 01:26:30.364932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:81952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.358 [2024-07-25 01:26:30.364938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:10.358 [2024-07-25 01:26:30.364950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:82304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.358 [2024-07-25 01:26:30.364957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:10.358 [2024-07-25 01:26:30.364974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:82320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.358 [2024-07-25 01:26:30.364981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:10.358 [2024-07-25 01:26:30.364993] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:82336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.358 [2024-07-25 01:26:30.365000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:10.358 [2024-07-25 01:26:30.365012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:82352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.358 [2024-07-25 01:26:30.365019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:10.358 [2024-07-25 01:26:30.365031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:82368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.358 [2024-07-25 01:26:30.365037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:10.358 [2024-07-25 01:26:30.365597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.358 [2024-07-25 01:26:30.365611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:10.358 [2024-07-25 01:26:30.365625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:82400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.358 [2024-07-25 01:26:30.365632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:10.358 [2024-07-25 01:26:30.365644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:82416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.358 [2024-07-25 01:26:30.365651] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:10.358 [2024-07-25 01:26:30.365663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:82432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.358 [2024-07-25 01:26:30.365670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:10.359 [2024-07-25 01:26:30.365682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:82448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.359 [2024-07-25 01:26:30.365688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:10.359 [2024-07-25 01:26:30.365701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:81872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.359 [2024-07-25 01:26:30.365707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:10.359 [2024-07-25 01:26:30.365720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:81544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.359 [2024-07-25 01:26:30.365726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:10.359 [2024-07-25 01:26:30.365738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:82464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.359 [2024-07-25 01:26:30.365745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:10.359 [2024-07-25 01:26:30.365759] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:82480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.359 [2024-07-25 01:26:30.365766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:10.359 [2024-07-25 01:26:30.365778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:81992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.359 [2024-07-25 01:26:30.365785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:10.359 [2024-07-25 01:26:30.365797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:82024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.359 [2024-07-25 01:26:30.365804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:10.359 [2024-07-25 01:26:30.365816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:82056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.359 [2024-07-25 01:26:30.365823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:10.359 [2024-07-25 01:26:30.365835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:82088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.359 [2024-07-25 01:26:30.365842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:10.359 [2024-07-25 01:26:30.365854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:82120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.359 [2024-07-25 01:26:30.365861] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:10.359 [2024-07-25 01:26:30.365873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:82152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.359 [2024-07-25 01:26:30.365880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:10.359 [2024-07-25 01:26:30.365892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.359 [2024-07-25 01:26:30.365899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:10.359 [2024-07-25 01:26:30.365911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:81984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.359 [2024-07-25 01:26:30.365917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:10.359 [2024-07-25 01:26:30.365930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:82016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.359 [2024-07-25 01:26:30.365936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:10.359 [2024-07-25 01:26:30.365948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:82048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.359 [2024-07-25 01:26:30.365955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.359 [2024-07-25 01:26:30.365967] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:82080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.359 [2024-07-25 01:26:30.365973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:10.359 [2024-07-25 01:26:30.365987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:81760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.359 [2024-07-25 01:26:30.365994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:10.359 [2024-07-25 01:26:30.366006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:81824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.359 [2024-07-25 01:26:30.366012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:10.359 [2024-07-25 01:26:30.366024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:82112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.359 [2024-07-25 01:26:30.366031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:10.359 [2024-07-25 01:26:30.366049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:82144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.359 [2024-07-25 01:26:30.366056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:10.359 [2024-07-25 01:26:30.366068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.359 [2024-07-25 01:26:30.366075] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:10.359 [2024-07-25 01:26:30.366087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:82208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.359 [2024-07-25 01:26:30.366094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:10.359 [2024-07-25 01:26:30.366576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.359 [2024-07-25 01:26:30.366588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:10.359 [2024-07-25 01:26:30.366602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:81768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.359 [2024-07-25 01:26:30.366609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:10.359 [2024-07-25 01:26:30.366621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:81944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.359 [2024-07-25 01:26:30.366628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:10.359 [2024-07-25 01:26:30.366641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:81632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.359 [2024-07-25 01:26:30.366647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:10.359 [2024-07-25 01:26:30.366660] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:80864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.359 [2024-07-25 01:26:30.366666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:10.359 [2024-07-25 01:26:30.366679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:81192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.359 [2024-07-25 01:26:30.366685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:10.359 [2024-07-25 01:26:30.366698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:81280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.359 [2024-07-25 01:26:30.366707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:10.359 [2024-07-25 01:26:30.366719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:81960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.359 [2024-07-25 01:26:30.366726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:10.359 [2024-07-25 01:26:30.366739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:81672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.359 [2024-07-25 01:26:30.366745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:10.359 [2024-07-25 01:26:30.366757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:81856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.359 [2024-07-25 01:26:30.366764] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:10.359 [2024-07-25 01:26:30.366776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:80784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.359 [2024-07-25 01:26:30.366783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:10.359 [2024-07-25 01:26:30.366795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:81728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.359 [2024-07-25 01:26:30.366802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:10.359 [2024-07-25 01:26:30.366814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:82240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.359 [2024-07-25 01:26:30.366821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:10.359 [2024-07-25 01:26:30.366833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:81880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.359 [2024-07-25 01:26:30.366840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:10.359 [2024-07-25 01:26:30.366852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:81752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.359 [2024-07-25 01:26:30.366858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:10.359 [2024-07-25 01:26:30.366871] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:82272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.359 [2024-07-25 01:26:30.366877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:10.359 [2024-07-25 01:26:30.366889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:81712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.359 [2024-07-25 01:26:30.366896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:10.360 [2024-07-25 01:26:30.366908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:81952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.360 [2024-07-25 01:26:30.366916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:10.360 [2024-07-25 01:26:30.366929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:82320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.360 [2024-07-25 01:26:30.366937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:10.360 [2024-07-25 01:26:30.366950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:82352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.360 [2024-07-25 01:26:30.366957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:10.360 [2024-07-25 01:26:30.368072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.360 [2024-07-25 01:26:30.368088] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:10.360 [2024-07-25 01:26:30.368102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:82248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.360 [2024-07-25 01:26:30.368109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:10.360 [2024-07-25 01:26:30.368122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:82504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.360 [2024-07-25 01:26:30.368129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:10.360 [2024-07-25 01:26:30.368141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:82520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.360 [2024-07-25 01:26:30.368148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:10.360 [2024-07-25 01:26:30.368160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:82400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.360 [2024-07-25 01:26:30.368167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.360 [2024-07-25 01:26:30.368180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.360 [2024-07-25 01:26:30.368187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:10.360 [2024-07-25 01:26:30.368199] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:81872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.360 [2024-07-25 01:26:30.368206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:10.360 [2024-07-25 01:26:30.368218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:82464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.360 [2024-07-25 01:26:30.368225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:10.360 [2024-07-25 01:26:30.368237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:81992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.360 [2024-07-25 01:26:30.368244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:10.360 [2024-07-25 01:26:30.368256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.360 [2024-07-25 01:26:30.368263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:10.360 [2024-07-25 01:26:30.368276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:82120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.360 [2024-07-25 01:26:30.368283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:10.360 [2024-07-25 01:26:30.368298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:82184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.360 [2024-07-25 01:26:30.368304] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:10.360 [2024-07-25 01:26:30.368317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:82016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.360 [2024-07-25 01:26:30.368323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:10.360 [2024-07-25 01:26:30.368336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:82080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.360 [2024-07-25 01:26:30.368342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:10.360 [2024-07-25 01:26:30.368355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.360 [2024-07-25 01:26:30.368362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:10.360 [2024-07-25 01:26:30.368374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:82144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.360 [2024-07-25 01:26:30.368381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:10.360 [2024-07-25 01:26:30.368393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:82208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.360 [2024-07-25 01:26:30.368400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:10.360 [2024-07-25 01:26:30.369144] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:82296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.360 [2024-07-25 01:26:30.369158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:10.360 [2024-07-25 01:26:30.369173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:82328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.360 [2024-07-25 01:26:30.369180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:10.360 [2024-07-25 01:26:30.369193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:82360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.360 [2024-07-25 01:26:30.369200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:10.360 [2024-07-25 01:26:30.369212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:82392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.360 [2024-07-25 01:26:30.369219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:10.360 [2024-07-25 01:26:30.369231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:82424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.360 [2024-07-25 01:26:30.369238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:10.360 [2024-07-25 01:26:30.369250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:81768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.360 [2024-07-25 01:26:30.369257] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:10.360 [2024-07-25 01:26:30.369273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:81632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.360 [2024-07-25 01:26:30.369280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:10.360 [2024-07-25 01:26:30.369292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:81192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.360 [2024-07-25 01:26:30.369299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:10.360 [2024-07-25 01:26:30.369311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:81960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.360 [2024-07-25 01:26:30.369318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:10.360 [2024-07-25 01:26:30.369330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:81856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.360 [2024-07-25 01:26:30.369337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:10.360 [2024-07-25 01:26:30.369349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:81728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.360 [2024-07-25 01:26:30.369356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:10.360 [2024-07-25 01:26:30.369368] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:81880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.360 [2024-07-25 01:26:30.369374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:10.360 [2024-07-25 01:26:30.369387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:82272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.360 [2024-07-25 01:26:30.369393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:10.361 [2024-07-25 01:26:30.369407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:81952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.361 [2024-07-25 01:26:30.369413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:10.361 [2024-07-25 01:26:30.369426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:82352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.361 [2024-07-25 01:26:30.369433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:10.361 [2024-07-25 01:26:30.369445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:82472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.361 [2024-07-25 01:26:30.369452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:10.361 [2024-07-25 01:26:30.369947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:82032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.361 [2024-07-25 01:26:30.369959] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:10.361 [2024-07-25 01:26:30.369972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:82536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.361 [2024-07-25 01:26:30.369979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:10.361 [2024-07-25 01:26:30.369992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:82552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.361 [2024-07-25 01:26:30.370001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:10.361 [2024-07-25 01:26:30.370014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:82568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.361 [2024-07-25 01:26:30.370021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.361 [2024-07-25 01:26:30.370033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:82584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.361 [2024-07-25 01:26:30.370040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:10.361 [2024-07-25 01:26:30.370057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:82600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.361 [2024-07-25 01:26:30.370064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:10.361 [2024-07-25 01:26:30.370077] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:82616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.361 [2024-07-25 01:26:30.370083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:10.361 [2024-07-25 01:26:30.370096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:82632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.361 [2024-07-25 01:26:30.370102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:10.361 [2024-07-25 01:26:30.370562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:82648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.361 [2024-07-25 01:26:30.370575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:10.361 [2024-07-25 01:26:30.370588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:82664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.361 [2024-07-25 01:26:30.370595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:10.361 [2024-07-25 01:26:30.370608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:82680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.361 [2024-07-25 01:26:30.370615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:10.361 [2024-07-25 01:26:30.370627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:82096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.361 [2024-07-25 01:26:30.370634] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:10.361 [2024-07-25 01:26:30.370646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.361 [2024-07-25 01:26:30.370653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:10.361 [2024-07-25 01:26:30.370665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:82488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.361 [2024-07-25 01:26:30.370672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:10.361 [2024-07-25 01:26:30.370684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:81800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.361 [2024-07-25 01:26:30.370693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:10.361 [2024-07-25 01:26:30.370706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:82248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.361 [2024-07-25 01:26:30.370712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:10.361 [2024-07-25 01:26:30.370725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:82520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.361 [2024-07-25 01:26:30.370731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:10.361 [2024-07-25 01:26:30.370743] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.361 [2024-07-25 01:26:30.370750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:10.361 [2024-07-25 01:26:30.370762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:82464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.361 [2024-07-25 01:26:30.370769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:10.361 [2024-07-25 01:26:30.370781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:82056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.361 [2024-07-25 01:26:30.370787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:10.361 [2024-07-25 01:26:30.370800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:82184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.361 [2024-07-25 01:26:30.370806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:10.361 [2024-07-25 01:26:30.370818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:82080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.361 [2024-07-25 01:26:30.370825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:10.361 [2024-07-25 01:26:30.370837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.361 [2024-07-25 01:26:30.370844] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:10.361 [2024-07-25 01:26:30.371431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:82688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.361 [2024-07-25 01:26:30.371444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:10.361 [2024-07-25 01:26:30.371459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:82704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.361 [2024-07-25 01:26:30.371466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:10.361 [2024-07-25 01:26:30.371478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:82720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.361 [2024-07-25 01:26:30.371485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:10.361 [2024-07-25 01:26:30.371497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.361 [2024-07-25 01:26:30.371504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:10.361 [2024-07-25 01:26:30.371518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:82752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.361 [2024-07-25 01:26:30.371525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:10.361 [2024-07-25 01:26:30.371538] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:82768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.361 [2024-07-25 01:26:30.371544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:10.361 [2024-07-25 01:26:30.371557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:81928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.361 [2024-07-25 01:26:30.371564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:10.361 [2024-07-25 01:26:30.371576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:82256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.361 [2024-07-25 01:26:30.371582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:10.361 [2024-07-25 01:26:30.371595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:82304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.361 [2024-07-25 01:26:30.371601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:10.361 [2024-07-25 01:26:30.371614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:82368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.361 [2024-07-25 01:26:30.371620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:10.361 [2024-07-25 01:26:30.371633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:82328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.361 [2024-07-25 01:26:30.371639] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:10.361 [2024-07-25 01:26:30.371651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:82392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.361 [2024-07-25 01:26:30.371658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:10.361 [2024-07-25 01:26:30.371671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:81768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.361 [2024-07-25 01:26:30.371677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.362 [2024-07-25 01:26:30.371690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:81192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.362 [2024-07-25 01:26:30.371696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:10.362 [2024-07-25 01:26:30.371708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:81856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.362 [2024-07-25 01:26:30.371715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:10.362 [2024-07-25 01:26:30.371727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:81880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.362 [2024-07-25 01:26:30.371734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:10.362 [2024-07-25 01:26:30.371748] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:81952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.362 [2024-07-25 01:26:30.371755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:10.362 [2024-07-25 01:26:30.371767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:82472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.362 [2024-07-25 01:26:30.371774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:10.362 [2024-07-25 01:26:30.372102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:82784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.362 [2024-07-25 01:26:30.372114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:10.362 [2024-07-25 01:26:30.372129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:82800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.362 [2024-07-25 01:26:30.372136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:10.362 [2024-07-25 01:26:30.372148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:82496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.362 [2024-07-25 01:26:30.372155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:10.362 [2024-07-25 01:26:30.372167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:82384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.362 [2024-07-25 01:26:30.372173] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:10.362 [2024-07-25 01:26:30.372186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:82448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.362 [2024-07-25 01:26:30.372192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:10.362 [2024-07-25 01:26:30.372205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:81984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.362 [2024-07-25 01:26:30.372212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:10.362 [2024-07-25 01:26:30.372224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:82536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.362 [2024-07-25 01:26:30.372231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:10.362 [2024-07-25 01:26:30.372243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:82568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.362 [2024-07-25 01:26:30.372250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:10.362 [2024-07-25 01:26:30.372262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:82600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.362 [2024-07-25 01:26:30.372268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:10.362 [2024-07-25 01:26:30.372281] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:82632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.362 [2024-07-25 01:26:30.372287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:10.362 [2024-07-25 01:26:30.372697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:82824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.362 [2024-07-25 01:26:30.372711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:10.362 [2024-07-25 01:26:30.372724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:82840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.362 [2024-07-25 01:26:30.372731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:10.362 [2024-07-25 01:26:30.372744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:82856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.362 [2024-07-25 01:26:30.372750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:10.362 [2024-07-25 01:26:30.372763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:82872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.362 [2024-07-25 01:26:30.372769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:10.362 [2024-07-25 01:26:30.372781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:82176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.362 [2024-07-25 01:26:30.372788] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:10.362 [2024-07-25 01:26:30.372800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.362 [2024-07-25 01:26:30.372807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:10.362 [2024-07-25 01:26:30.372819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:82096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.362 [2024-07-25 01:26:30.372826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:10.362 [2024-07-25 01:26:30.372838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:82488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.362 [2024-07-25 01:26:30.372845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:10.362 [2024-07-25 01:26:30.372857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:82248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.362 [2024-07-25 01:26:30.372864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:10.362 [2024-07-25 01:26:30.372876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:82432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.362 [2024-07-25 01:26:30.372882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:10.362 [2024-07-25 01:26:30.372895] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.362 [2024-07-25 01:26:30.372901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:10.362 [2024-07-25 01:26:30.373015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:82080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.362 [2024-07-25 01:26:30.373024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:10.362 [2024-07-25 01:26:30.373037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:82240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.362 [2024-07-25 01:26:30.373052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:10.362 [2024-07-25 01:26:30.373064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:82880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.362 [2024-07-25 01:26:30.373071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:10.362 [2024-07-25 01:26:30.373083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:82896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.362 [2024-07-25 01:26:30.373090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:10.362 [2024-07-25 01:26:30.373103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:82912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.362 [2024-07-25 01:26:30.373109] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:10.362 [2024-07-25 01:26:30.374141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:82528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.362 [2024-07-25 01:26:30.374157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.362 [2024-07-25 01:26:30.374171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:82560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.362 [2024-07-25 01:26:30.374178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:10.362 [2024-07-25 01:26:30.374191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:82592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.362 [2024-07-25 01:26:30.374198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:10.362 [2024-07-25 01:26:30.374210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.362 [2024-07-25 01:26:30.374217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:10.362 [2024-07-25 01:26:30.374230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:82704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.362 [2024-07-25 01:26:30.374236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:10.362 [2024-07-25 01:26:30.374248] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:82736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.362 [2024-07-25 01:26:30.374255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:10.362 [2024-07-25 01:26:30.374267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:82768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.362 [2024-07-25 01:26:30.374274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:10.362 [2024-07-25 01:26:30.374286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:82256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.363 [2024-07-25 01:26:30.374293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:10.363 [2024-07-25 01:26:30.374305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:82368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.363 [2024-07-25 01:26:30.374312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:10.363 [2024-07-25 01:26:30.374327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:82392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.363 [2024-07-25 01:26:30.374334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:10.363 [2024-07-25 01:26:30.374346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:81192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.363 [2024-07-25 01:26:30.374353] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:10.363 [2024-07-25 01:26:30.374365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:81880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.363 [2024-07-25 01:26:30.374372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:10.363 [2024-07-25 01:26:30.374384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:82472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.363 [2024-07-25 01:26:30.374391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:10.363 [2024-07-25 01:26:30.375164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:82656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.363 [2024-07-25 01:26:30.375179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:10.363 [2024-07-25 01:26:30.375193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:82504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.363 [2024-07-25 01:26:30.375200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:10.363 [2024-07-25 01:26:30.375213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:82800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.363 [2024-07-25 01:26:30.375219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:10.363 [2024-07-25 01:26:30.375232] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:82384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.363 [2024-07-25 01:26:30.375239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:10.363 [2024-07-25 01:26:30.375251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:81984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.363 [2024-07-25 01:26:30.375258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:10.363 [2024-07-25 01:26:30.375270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:82568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.363 [2024-07-25 01:26:30.375277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:10.363 [2024-07-25 01:26:30.375289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:82632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.363 [2024-07-25 01:26:30.375296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:10.363 [2024-07-25 01:26:30.375309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:82208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.363 [2024-07-25 01:26:30.375315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:10.363 [2024-07-25 01:26:30.375435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.363 [2024-07-25 01:26:30.375445] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:10.363 [2024-07-25 01:26:30.375459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:82872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.363 [2024-07-25 01:26:30.375466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:10.363 [2024-07-25 01:26:30.375478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:82664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.363 [2024-07-25 01:26:30.375484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:10.363 [2024-07-25 01:26:30.375497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:82488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.363 [2024-07-25 01:26:30.375504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:10.363 [2024-07-25 01:26:30.375516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:82432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.363 [2024-07-25 01:26:30.375523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:10.363 [2024-07-25 01:26:30.375915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:82936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.363 [2024-07-25 01:26:30.375928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:10.363 [2024-07-25 01:26:30.375942] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:82952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.363 [2024-07-25 01:26:30.375949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:10.363 [2024-07-25 01:26:30.375961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:82968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.363 [2024-07-25 01:26:30.375968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:10.363 [2024-07-25 01:26:30.375980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:82984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.363 [2024-07-25 01:26:30.375987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:10.363 [2024-07-25 01:26:30.375999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:83000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.363 [2024-07-25 01:26:30.376006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:10.363 [2024-07-25 01:26:30.376018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:83016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.363 [2024-07-25 01:26:30.376025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:10.363 [2024-07-25 01:26:30.376038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:83032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.363 [2024-07-25 01:26:30.376050] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.363 [2024-07-25 01:26:30.376063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:82712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.363 [2024-07-25 01:26:30.376074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:10.363 [2024-07-25 01:26:30.376087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:82744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.363 [2024-07-25 01:26:30.376094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:10.363 [2024-07-25 01:26:30.376195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:82080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.363 [2024-07-25 01:26:30.376204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:10.363 [2024-07-25 01:26:30.376217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:82880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.363 [2024-07-25 01:26:30.376224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:10.363 [2024-07-25 01:26:30.376236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:82912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.363 [2024-07-25 01:26:30.376243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:10.363 [2024-07-25 01:26:30.376256] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:82272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.363 [2024-07-25 01:26:30.376262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:10.363 [2024-07-25 01:26:30.376275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:82776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.363 [2024-07-25 01:26:30.376281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:10.363 [2024-07-25 01:26:30.376579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:82560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.363 [2024-07-25 01:26:30.376589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:10.363 [2024-07-25 01:26:30.376602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:82624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.363 [2024-07-25 01:26:30.376609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:10.363 [2024-07-25 01:26:30.376621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:82736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.363 [2024-07-25 01:26:30.376628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:10.363 [2024-07-25 01:26:30.376640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:82256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.363 [2024-07-25 01:26:30.376646] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:10.363 [2024-07-25 01:26:30.376659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:82392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.363 [2024-07-25 01:26:30.376666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:10.363 [2024-07-25 01:26:30.376678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:81880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.363 [2024-07-25 01:26:30.376688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:10.363 [2024-07-25 01:26:30.377265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:83048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.364 [2024-07-25 01:26:30.377280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:10.364 [2024-07-25 01:26:30.377294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:83064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.364 [2024-07-25 01:26:30.377301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:10.364 [2024-07-25 01:26:30.377313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:83080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.364 [2024-07-25 01:26:30.377320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:10.364 [2024-07-25 01:26:30.377332] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:83096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.364 [2024-07-25 01:26:30.377338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:10.364 [2024-07-25 01:26:30.377351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:83112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.364 [2024-07-25 01:26:30.377357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:10.364 [2024-07-25 01:26:30.377370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:82792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.364 [2024-07-25 01:26:30.377376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:10.364 [2024-07-25 01:26:30.377388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:82552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.364 [2024-07-25 01:26:30.377395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:10.364 [2024-07-25 01:26:30.377408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:82616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.364 [2024-07-25 01:26:30.377414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:10.364 [2024-07-25 01:26:30.377427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:82832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.364 [2024-07-25 01:26:30.377433] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:10.364 [2024-07-25 01:26:30.377867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:82864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.364 [2024-07-25 01:26:30.377880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:10.364 [2024-07-25 01:26:30.377895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:82680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.364 [2024-07-25 01:26:30.377901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:10.364 [2024-07-25 01:26:30.377913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.364 [2024-07-25 01:26:30.377920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:10.364 [2024-07-25 01:26:30.377935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:82888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.364 [2024-07-25 01:26:30.377942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:10.364 [2024-07-25 01:26:30.377955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:82504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.364 [2024-07-25 01:26:30.377961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:10.364 [2024-07-25 01:26:30.377974] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:82384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.364 [2024-07-25 01:26:30.377980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:10.364 [2024-07-25 01:26:30.377993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:82568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.364 [2024-07-25 01:26:30.377999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:10.364 [2024-07-25 01:26:30.378012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:82208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.364 [2024-07-25 01:26:30.378018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:10.364 [2024-07-25 01:26:30.378031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:82872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.364 [2024-07-25 01:26:30.378037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:10.364 [2024-07-25 01:26:30.378056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:82488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.364 [2024-07-25 01:26:30.378063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.364 [2024-07-25 01:26:30.378178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:82920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.364 [2024-07-25 01:26:30.378188] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:10.364 [2024-07-25 01:26:30.378201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:82720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.364 [2024-07-25 01:26:30.378208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:10.364 [2024-07-25 01:26:30.378220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.364 [2024-07-25 01:26:30.378226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:10.364 [2024-07-25 01:26:30.378239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:82536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.364 [2024-07-25 01:26:30.378246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:10.364 [2024-07-25 01:26:30.378648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:82952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.364 [2024-07-25 01:26:30.378658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:10.364 [2024-07-25 01:26:30.378673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:82984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.364 [2024-07-25 01:26:30.378680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:10.364 [2024-07-25 01:26:30.378693] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:83016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.364 [2024-07-25 01:26:30.378700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:10.364 [2024-07-25 01:26:30.378712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.364 [2024-07-25 01:26:30.378719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:10.364 [2024-07-25 01:26:30.378780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:82880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.364 [2024-07-25 01:26:30.378789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:10.364 [2024-07-25 01:26:30.378802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:82272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.364 [2024-07-25 01:26:30.378809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:10.364 [2024-07-25 01:26:30.379234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:82624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.364 [2024-07-25 01:26:30.379248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:10.364 [2024-07-25 01:26:30.379262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:82256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.364 [2024-07-25 01:26:30.379269] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:10.364 [2024-07-25 01:26:30.379282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:81880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.364 [2024-07-25 01:26:30.379288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:10.364 [2024-07-25 01:26:30.379707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:83136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.364 [2024-07-25 01:26:30.379718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:10.364 [2024-07-25 01:26:30.379731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:83152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.364 [2024-07-25 01:26:30.379738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:10.364 [2024-07-25 01:26:30.379751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.364 [2024-07-25 01:26:30.379758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:10.364 [2024-07-25 01:26:30.379892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.365 [2024-07-25 01:26:30.379901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:10.365 [2024-07-25 01:26:30.379918] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:82824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.365 [2024-07-25 01:26:30.379925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:10.365 [2024-07-25 01:26:30.379937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:82928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.365 [2024-07-25 01:26:30.379944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:10.365 [2024-07-25 01:26:30.379956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:82960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.365 [2024-07-25 01:26:30.379963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:10.365 [2024-07-25 01:26:30.379976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:82992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.365 [2024-07-25 01:26:30.379982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:10.365 [2024-07-25 01:26:30.380360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:83064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.365 [2024-07-25 01:26:30.380370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:10.365 [2024-07-25 01:26:30.380453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:83096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.365 [2024-07-25 01:26:30.380463] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:26:10.365 [2024-07-25 01:26:30.380476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:82792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.365 [2024-07-25 01:26:30.380482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:26:10.365 [2024-07-25 01:26:30.381676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:83200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:10.365 [2024-07-25 01:26:30.381692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003b p:0 m:0 dnr:0
[... ~150 near-identical command/completion pairs elided (timestamps 01:26:30.380 through 01:26:30.390): every outstanding READ and WRITE on qid:1 completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd advancing 0x0038-0x002b ...]
00:26:10.368 [2024-07-25 01:26:30.390081] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:83496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.368 [2024-07-25 01:26:30.390087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:10.368 [2024-07-25 01:26:30.390100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:83248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.368 [2024-07-25 01:26:30.390109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:10.368 [2024-07-25 01:26:30.390122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:83536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.368 [2024-07-25 01:26:30.390128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:10.368 [2024-07-25 01:26:30.390141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:83568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.368 [2024-07-25 01:26:30.390147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:10.368 [2024-07-25 01:26:30.390161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:83776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.368 [2024-07-25 01:26:30.390168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:10.368 [2024-07-25 01:26:30.390180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:83792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.368 [2024-07-25 01:26:30.390187] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:10.368 [2024-07-25 01:26:30.390199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:83584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.368 [2024-07-25 01:26:30.390206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:10.368 [2024-07-25 01:26:30.390353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:83600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.368 [2024-07-25 01:26:30.390362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:10.368 [2024-07-25 01:26:30.390375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:83136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.368 [2024-07-25 01:26:30.390382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:10.368 [2024-07-25 01:26:30.390394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:83096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.368 [2024-07-25 01:26:30.390401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:10.368 [2024-07-25 01:26:30.390414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.368 [2024-07-25 01:26:30.390420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:10.368 [2024-07-25 01:26:30.390432] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:83392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.368 [2024-07-25 01:26:30.390439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:10.368 [2024-07-25 01:26:30.390451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:83592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.368 [2024-07-25 01:26:30.390458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:10.368 [2024-07-25 01:26:30.390470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:83624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.368 [2024-07-25 01:26:30.390477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:10.368 [2024-07-25 01:26:30.390493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:83320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.368 [2024-07-25 01:26:30.390500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:10.368 [2024-07-25 01:26:30.390512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:83384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.368 [2024-07-25 01:26:30.390519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:10.368 [2024-07-25 01:26:30.390531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:83176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.368 [2024-07-25 01:26:30.390538] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:10.368 [2024-07-25 01:26:30.390551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:82384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.368 [2024-07-25 01:26:30.390557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:10.368 [2024-07-25 01:26:30.391139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.368 [2024-07-25 01:26:30.391152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:10.368 [2024-07-25 01:26:30.391167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:83824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.368 [2024-07-25 01:26:30.391173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:10.368 [2024-07-25 01:26:30.391186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:83840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.368 [2024-07-25 01:26:30.391193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:10.368 [2024-07-25 01:26:30.391205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:83856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.368 [2024-07-25 01:26:30.391211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.368 [2024-07-25 01:26:30.391223] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:83312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.368 [2024-07-25 01:26:30.391230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:10.368 [2024-07-25 01:26:30.391243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:83376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.368 [2024-07-25 01:26:30.391249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:10.368 [2024-07-25 01:26:30.391380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:83488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.368 [2024-07-25 01:26:30.391389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:10.368 [2024-07-25 01:26:30.391403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:83208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.368 [2024-07-25 01:26:30.391410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:10.368 [2024-07-25 01:26:30.391471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:83216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.368 [2024-07-25 01:26:30.391480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:10.368 [2024-07-25 01:26:30.391494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:83472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.368 [2024-07-25 01:26:30.391501] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:10.368 [2024-07-25 01:26:30.391932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:83656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.368 [2024-07-25 01:26:30.391944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:10.368 [2024-07-25 01:26:30.391958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:83688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.368 [2024-07-25 01:26:30.391964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:10.368 [2024-07-25 01:26:30.392113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:83720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.368 [2024-07-25 01:26:30.392125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:10.368 [2024-07-25 01:26:30.392149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:83632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.369 [2024-07-25 01:26:30.392156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:10.369 [2024-07-25 01:26:30.392169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:83664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.369 [2024-07-25 01:26:30.392175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:10.369 [2024-07-25 01:26:30.392188] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:83696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.369 [2024-07-25 01:26:30.392194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:10.369 [2024-07-25 01:26:30.392207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:83728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.369 [2024-07-25 01:26:30.392214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:10.369 [2024-07-25 01:26:30.392271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:83528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.369 [2024-07-25 01:26:30.392280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:10.369 [2024-07-25 01:26:30.392742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:83872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.369 [2024-07-25 01:26:30.392754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:10.369 [2024-07-25 01:26:30.392767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.369 [2024-07-25 01:26:30.392774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:10.369 [2024-07-25 01:26:30.392786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:83904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.369 [2024-07-25 01:26:30.392796] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:10.369 [2024-07-25 01:26:30.392809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:83920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.369 [2024-07-25 01:26:30.392815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:10.369 [2024-07-25 01:26:30.392828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.369 [2024-07-25 01:26:30.392834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:10.369 [2024-07-25 01:26:30.392847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:83952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.369 [2024-07-25 01:26:30.392853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:10.369 [2024-07-25 01:26:30.392866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:83968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.369 [2024-07-25 01:26:30.392872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:10.369 [2024-07-25 01:26:30.393675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:83976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.369 [2024-07-25 01:26:30.393690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:10.369 [2024-07-25 01:26:30.393705] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:83992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.369 [2024-07-25 01:26:30.393711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:10.369 [2024-07-25 01:26:30.393724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:84008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.369 [2024-07-25 01:26:30.393731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:10.369 [2024-07-25 01:26:30.393743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:82880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.369 [2024-07-25 01:26:30.393750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:10.369 [2024-07-25 01:26:30.393762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:82256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.369 [2024-07-25 01:26:30.393769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:10.369 [2024-07-25 01:26:30.393781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:83752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.369 [2024-07-25 01:26:30.393787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:10.369 [2024-07-25 01:26:30.393800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:83464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.369 [2024-07-25 01:26:30.393806] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:10.369 [2024-07-25 01:26:30.393819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:83248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.369 [2024-07-25 01:26:30.393828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:10.369 [2024-07-25 01:26:30.393841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:83568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.369 [2024-07-25 01:26:30.393848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:10.369 [2024-07-25 01:26:30.393860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:83792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.369 [2024-07-25 01:26:30.393867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:10.369 [2024-07-25 01:26:30.393878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:83136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.369 [2024-07-25 01:26:30.393885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.369 [2024-07-25 01:26:30.393898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:83328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.369 [2024-07-25 01:26:30.393904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:10.369 [2024-07-25 01:26:30.393917] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:83592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.369 [2024-07-25 01:26:30.393923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:10.369 [2024-07-25 01:26:30.393935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:83320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.369 [2024-07-25 01:26:30.393942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:10.369 [2024-07-25 01:26:30.393954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:83176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.369 [2024-07-25 01:26:30.393961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:10.369 [2024-07-25 01:26:30.393973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:83512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.369 [2024-07-25 01:26:30.393979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:10.369 [2024-07-25 01:26:30.393992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:83576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.369 [2024-07-25 01:26:30.393998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:10.369 [2024-07-25 01:26:30.394010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:83768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.369 [2024-07-25 01:26:30.394017] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:10.369 [2024-07-25 01:26:30.394029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:84016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.369 [2024-07-25 01:26:30.394036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:10.369 [2024-07-25 01:26:30.394054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:84032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.369 [2024-07-25 01:26:30.394063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:10.369 [2024-07-25 01:26:30.394075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:83800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.369 [2024-07-25 01:26:30.394082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:10.369 [2024-07-25 01:26:30.394094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:84056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.370 [2024-07-25 01:26:30.394101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:10.370 [2024-07-25 01:26:30.394114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:84072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.370 [2024-07-25 01:26:30.394120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:10.370 [2024-07-25 01:26:30.394828] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:83824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.370 [2024-07-25 01:26:30.394841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:10.370 [2024-07-25 01:26:30.394856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:83856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:10.370 [2024-07-25 01:26:30.394862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:10.370 [2024-07-25 01:26:30.394875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:83376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.370 [2024-07-25 01:26:30.394881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:10.370 [2024-07-25 01:26:30.394894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:83208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.370 [2024-07-25 01:26:30.394901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:10.370 [2024-07-25 01:26:30.394913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:83472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.370 [2024-07-25 01:26:30.394920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:10.370 [2024-07-25 01:26:30.394932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:83296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.370 [2024-07-25 01:26:30.394938] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:26:10.370 [2024-07-25 01:26:30.394951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:83816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.370 [2024-07-25 01:26:30.394958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:26:10.370 Received shutdown signal, test time was about 27.011709 seconds
00:26:10.370
00:26:10.370 Latency(us)
00:26:10.370 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:10.370 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:26:10.370 Verification LBA range: start 0x0 length 0x4000
00:26:10.370 Nvme0n1 : 27.01 10428.80 40.74 0.00 0.00 12240.69 933.18 3019898.88
00:26:10.370 ===================================================================================================================
00:26:10.370 Total : 10428.80 40.74 0.00 0.00 12240.69 933.18 3019898.88
00:26:10.370 01:26:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:10.630 01:26:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:26:10.630 01:26:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:26:10.630 01:26:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:26:10.630 01:26:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:26:10.630 01:26:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:26:10.630 01:26:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:26:10.630 01:26:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
00:26:10.630 01:26:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20}
00:26:10.630 01:26:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:26:10.630 rmmod nvme_tcp
00:26:10.630 rmmod nvme_fabrics
00:26:10.630 rmmod nvme_keyring
00:26:10.630 01:26:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:26:10.630 01:26:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e
00:26:10.630 01:26:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0
00:26:10.630 01:26:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 1008676 ']'
00:26:10.630 01:26:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 1008676
00:26:10.630 01:26:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 1008676 ']'
00:26:10.630 01:26:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 1008676
00:26:10.630 01:26:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname
00:26:10.630 01:26:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:26:10.630 01:26:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1008676
00:26:10.630 01:26:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:26:10.630 01:26:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:26:10.630 01:26:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1008676'
00:26:10.630 killing process with pid 1008676
00:26:10.630 01:26:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 1008676
00:26:10.630 01:26:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 1008676
00:26:10.891 01:26:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:26:10.891 01:26:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:26:10.891 01:26:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:26:10.891 01:26:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:26:10.891 01:26:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns
00:26:10.891 01:26:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:10.891 01:26:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:26:10.891 01:26:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:13.429 01:26:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:26:13.429
00:26:13.429 real 0m38.926s
00:26:13.429 user 1m45.745s
00:26:13.429 sys 0m10.350s
00:26:13.429 01:26:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable
00:26:13.429 01:26:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:26:13.429 ************************************
00:26:13.429 END TEST nvmf_host_multipath_status
00:26:13.429 ************************************
00:26:13.429 01:26:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:26:13.429 01:26:35 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:26:13.429 01:26:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:26:13.429 01:26:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:26:13.429 01:26:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:26:13.429 ************************************
00:26:13.429 START TEST nvmf_discovery_remove_ifc
00:26:13.429 ************************************
00:26:13.429 01:26:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:26:13.429 * Looking for test storage...
00:26:13.429 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:26:13.429 01:26:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:26:13.429 01:26:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s
00:26:13.429 01:26:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:26:13.429 01:26:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:26:13.429 01:26:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:26:13.429 01:26:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:26:13.429 01:26:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:26:13.429 01:26:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:26:13.429 01:26:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:26:13.429 01:26:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:26:13.429 01:26:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:26:13.429 01:26:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:26:13.429
01:26:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:13.429 01:26:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:13.429 01:26:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:13.429 01:26:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:13.429 01:26:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:13.429 01:26:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:13.429 01:26:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:13.429 01:26:35 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:13.429 01:26:35 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:13.429 01:26:35 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:13.429 01:26:35 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.429 01:26:35 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.429 01:26:35 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.429 01:26:35 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:26:13.429 01:26:35 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.429 01:26:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:26:13.429 01:26:35 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:13.429 01:26:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:13.429 01:26:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:13.429 01:26:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:13.429 01:26:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:13.429 01:26:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:13.429 01:26:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:13.429 01:26:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:13.429 01:26:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:26:13.429 01:26:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:26:13.429 01:26:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:26:13.429 01:26:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:26:13.429 01:26:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:26:13.429 01:26:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:26:13.429 01:26:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:26:13.429 01:26:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:13.429 01:26:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:13.429 01:26:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:26:13.429 01:26:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:13.429 01:26:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:13.429 01:26:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:13.429 01:26:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:13.429 01:26:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:13.429 01:26:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:13.429 01:26:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:13.429 01:26:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:26:13.429 01:26:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:18.707 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:18.707 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:26:18.707 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:18.707 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:18.707 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:18.707 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:18.707 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:18.707 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:26:18.707 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:18.707 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@296 -- # e810=() 00:26:18.707 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:26:18.707 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:26:18.707 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:26:18.707 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:26:18.707 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:26:18.707 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:18.707 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:18.707 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:18.707 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:18.707 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:18.707 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:18.707 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:18.707 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:18.707 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:18.707 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:18.707 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:18.707 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:26:18.707 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:18.707 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:18.707 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:18.707 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:18.707 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:18.707 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:18.707 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:18.707 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:18.707 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:18.707 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:18.707 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:18.707 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:18.707 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:18.708 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:18.708 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:18.708 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:18.708 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:18.708 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:18.708 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:18.708 
01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:18.708 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:18.708 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:18.708 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:18.708 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:18.708 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:18.708 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:18.708 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:18.708 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:18.708 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:18.708 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:18.708 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:18.708 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:18.708 Found net devices under 0000:86:00.0: cvl_0_0 00:26:18.708 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:18.708 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:18.708 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:18.708 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:18.708 01:26:40 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:18.708 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:18.708 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:18.708 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:18.708 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:18.708 Found net devices under 0000:86:00.1: cvl_0_1 00:26:18.708 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:18.708 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:18.708 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:26:18.708 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:18.708 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:18.708 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:18.708 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:18.708 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:18.708 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:18.708 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:18.708 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:18.708 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:18.708 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # 
NVMF_SECOND_TARGET_IP= 00:26:18.708 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:18.708 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:18.708 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:18.708 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:18.708 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:18.708 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:18.708 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:18.708 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:18.708 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:18.708 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:18.708 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:18.708 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:18.708 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:18.708 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:18.708 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.299 ms 00:26:18.708 00:26:18.708 --- 10.0.0.2 ping statistics --- 00:26:18.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:18.708 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:26:18.708 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:18.708 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:18.708 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.427 ms 00:26:18.708 00:26:18.708 --- 10.0.0.1 ping statistics --- 00:26:18.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:18.708 rtt min/avg/max/mdev = 0.427/0.427/0.427/0.000 ms 00:26:18.708 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:18.708 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:26:18.708 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:18.708 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:18.708 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:18.708 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:18.708 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:18.708 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:18.708 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:18.708 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:26:18.708 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:18.708 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:26:18.708 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:18.708 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:18.708 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=1017473 00:26:18.708 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 1017473 00:26:18.708 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 1017473 ']' 00:26:18.708 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:18.708 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:18.708 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:18.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:18.708 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:18.708 01:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:18.708 [2024-07-25 01:26:40.910072] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:26:18.708 [2024-07-25 01:26:40.910119] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:18.708 EAL: No free 2048 kB hugepages reported on node 1 00:26:18.708 [2024-07-25 01:26:40.967377] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:18.708 [2024-07-25 01:26:41.047893] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:18.708 [2024-07-25 01:26:41.047930] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:18.708 [2024-07-25 01:26:41.047938] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:18.708 [2024-07-25 01:26:41.047944] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:18.708 [2024-07-25 01:26:41.047949] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:18.708 [2024-07-25 01:26:41.047966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:19.278 01:26:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:19.278 01:26:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:26:19.278 01:26:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:19.278 01:26:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:19.278 01:26:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:19.278 01:26:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:19.278 01:26:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:19.278 01:26:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.278 01:26:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:19.278 [2024-07-25 01:26:41.766873] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:19.537 [2024-07-25 01:26:41.774989] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:19.538 null0 00:26:19.538 [2024-07-25 01:26:41.807007] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:19.538 01:26:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.538 01:26:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1017618 00:26:19.538 01:26:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:26:19.538 01:26:41 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1017618 /tmp/host.sock 00:26:19.538 01:26:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 1017618 ']' 00:26:19.538 01:26:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:26:19.538 01:26:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:19.538 01:26:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:19.538 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:19.538 01:26:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:19.538 01:26:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:19.538 [2024-07-25 01:26:41.873643] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:26:19.538 [2024-07-25 01:26:41.873685] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1017618 ] 00:26:19.538 EAL: No free 2048 kB hugepages reported on node 1 00:26:19.538 [2024-07-25 01:26:41.927010] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:19.538 [2024-07-25 01:26:42.006717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:20.478 01:26:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:20.478 01:26:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:26:20.478 01:26:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:20.478 01:26:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:20.478 01:26:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.478 01:26:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:20.478 01:26:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.478 01:26:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:20.478 01:26:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.478 01:26:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:20.478 01:26:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.478 01:26:42 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:20.478 01:26:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.478 01:26:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:21.414 [2024-07-25 01:26:43.818053] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:21.414 [2024-07-25 01:26:43.818089] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:21.414 [2024-07-25 01:26:43.818104] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:21.673 [2024-07-25 01:26:43.946525] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:21.934 [2024-07-25 01:26:44.170293] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:21.934 [2024-07-25 01:26:44.170337] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:21.934 [2024-07-25 01:26:44.170357] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:21.934 [2024-07-25 01:26:44.170369] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:21.934 [2024-07-25 01:26:44.170386] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:21.934 01:26:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.934 01:26:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:21.934 01:26:44 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:21.934 01:26:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:21.934 01:26:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:21.934 01:26:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.934 01:26:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:21.934 01:26:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:21.934 [2024-07-25 01:26:44.176829] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1002e60 was disconnected and freed. delete nvme_qpair. 00:26:21.934 01:26:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:21.934 01:26:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.934 01:26:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:21.934 01:26:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:26:21.934 01:26:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:26:21.934 01:26:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:21.934 01:26:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:21.934 01:26:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:21.934 01:26:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:21.934 01:26:44 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.934 01:26:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:21.934 01:26:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:21.934 01:26:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:21.934 01:26:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.934 01:26:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:21.934 01:26:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:22.888 01:26:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:22.888 01:26:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:22.888 01:26:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:22.888 01:26:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.888 01:26:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:22.888 01:26:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:22.888 01:26:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:23.149 01:26:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.149 01:26:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:23.149 01:26:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:24.089 01:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:24.089 01:26:46 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:24.089 01:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:24.089 01:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:24.089 01:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.089 01:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:24.089 01:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:24.089 01:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.089 01:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:24.089 01:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:25.050 01:26:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:25.050 01:26:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:25.050 01:26:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:25.050 01:26:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.050 01:26:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:25.050 01:26:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:25.050 01:26:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:25.050 01:26:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.050 01:26:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:25.050 
01:26:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:26.431 01:26:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:26.431 01:26:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:26.431 01:26:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:26.431 01:26:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:26.431 01:26:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.431 01:26:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:26.431 01:26:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:26.431 01:26:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.431 01:26:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:26.431 01:26:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:27.371 01:26:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:27.371 01:26:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:27.371 01:26:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:27.371 01:26:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.371 01:26:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:27.371 01:26:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:27.371 01:26:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:27.371 
01:26:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.371 [2024-07-25 01:26:49.611427] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:26:27.371 [2024-07-25 01:26:49.611464] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:27.371 [2024-07-25 01:26:49.611474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.371 [2024-07-25 01:26:49.611482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:27.371 [2024-07-25 01:26:49.611489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.371 [2024-07-25 01:26:49.611496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:27.371 [2024-07-25 01:26:49.611502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.372 [2024-07-25 01:26:49.611509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:27.372 [2024-07-25 01:26:49.611515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.372 [2024-07-25 01:26:49.611523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:27.372 [2024-07-25 01:26:49.611530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.372 [2024-07-25 01:26:49.611536] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc96a0 is same with the state(5) to be set 00:26:27.372 [2024-07-25 01:26:49.621449] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc96a0 (9): Bad file descriptor 00:26:27.372 01:26:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:27.372 01:26:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:27.372 [2024-07-25 01:26:49.631488] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:28.311 01:26:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:28.311 01:26:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:28.311 01:26:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:28.311 01:26:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:28.311 01:26:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.311 01:26:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:28.311 01:26:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:28.311 [2024-07-25 01:26:50.690114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:28.311 [2024-07-25 01:26:50.690165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc96a0 with addr=10.0.0.2, port=4420 00:26:28.311 [2024-07-25 01:26:50.690188] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc96a0 is same with the state(5) to be set 00:26:28.311 [2024-07-25 01:26:50.690220] 
nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc96a0 (9): Bad file descriptor 00:26:28.311 [2024-07-25 01:26:50.690648] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:28.311 [2024-07-25 01:26:50.690671] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:28.311 [2024-07-25 01:26:50.690680] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:28.311 [2024-07-25 01:26:50.690690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:28.311 [2024-07-25 01:26:50.690711] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:28.311 [2024-07-25 01:26:50.690721] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:28.311 01:26:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.311 01:26:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:28.311 01:26:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:29.252 [2024-07-25 01:26:51.693201] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:26:29.252 [2024-07-25 01:26:51.693225] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:29.252 [2024-07-25 01:26:51.693233] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:29.252 [2024-07-25 01:26:51.693240] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:26:29.252 [2024-07-25 01:26:51.693253] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.252 [2024-07-25 01:26:51.693272] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:29.252 [2024-07-25 01:26:51.693293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:29.252 [2024-07-25 01:26:51.693303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.252 [2024-07-25 01:26:51.693314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:29.252 [2024-07-25 01:26:51.693321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.252 [2024-07-25 01:26:51.693328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:29.252 [2024-07-25 01:26:51.693336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.252 [2024-07-25 01:26:51.693343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:29.252 
[2024-07-25 01:26:51.693350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.252 [2024-07-25 01:26:51.693359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:29.252 [2024-07-25 01:26:51.693367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.252 [2024-07-25 01:26:51.693375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:26:29.252 [2024-07-25 01:26:51.693456] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc8a80 (9): Bad file descriptor 00:26:29.252 [2024-07-25 01:26:51.694467] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:29.252 [2024-07-25 01:26:51.694478] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:26:29.252 01:26:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:29.252 01:26:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:29.252 01:26:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:29.252 01:26:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.252 01:26:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:29.252 01:26:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:29.252 01:26:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:29.252 01:26:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:26:29.252 01:26:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:29.252 01:26:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:29.513 01:26:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:29.513 01:26:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:29.513 01:26:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:29.513 01:26:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:29.513 01:26:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:29.513 01:26:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.513 01:26:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:29.513 01:26:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:29.513 01:26:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:29.513 01:26:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.513 01:26:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:29.513 01:26:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:30.454 01:26:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:30.454 01:26:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:30.454 01:26:52 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:30.454 01:26:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:30.454 01:26:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.454 01:26:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:30.454 01:26:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:30.454 01:26:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.454 01:26:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:30.454 01:26:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:31.394 [2024-07-25 01:26:53.753251] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:31.394 [2024-07-25 01:26:53.753271] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:31.394 [2024-07-25 01:26:53.753286] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:31.394 [2024-07-25 01:26:53.883687] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:26:31.653 01:26:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:31.653 01:26:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:31.653 01:26:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:31.653 01:26:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.653 01:26:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:31.653 
01:26:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:31.653 01:26:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:31.653 01:26:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.653 01:26:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:31.653 01:26:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:31.653 [2024-07-25 01:26:54.105924] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:31.653 [2024-07-25 01:26:54.105960] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:31.653 [2024-07-25 01:26:54.105978] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:31.653 [2024-07-25 01:26:54.105991] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:26:31.653 [2024-07-25 01:26:54.105997] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:31.653 [2024-07-25 01:26:54.112410] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xfdfa70 was disconnected and freed. delete nvme_qpair. 
00:26:32.592 01:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:32.592 01:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:32.592 01:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:32.592 01:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:32.592 01:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.592 01:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:32.592 01:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:32.592 01:26:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.592 01:26:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:32.592 01:26:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:32.592 01:26:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1017618 00:26:32.592 01:26:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 1017618 ']' 00:26:32.592 01:26:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 1017618 00:26:32.592 01:26:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:26:32.592 01:26:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:32.592 01:26:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1017618 00:26:32.592 01:26:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:32.592 01:26:55 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:32.592 01:26:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1017618' 00:26:32.592 killing process with pid 1017618 00:26:32.592 01:26:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 1017618 00:26:32.592 01:26:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 1017618 00:26:32.852 01:26:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:26:32.852 01:26:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:32.852 01:26:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:26:32.852 01:26:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:32.852 01:26:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:26:32.852 01:26:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:32.852 01:26:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:32.852 rmmod nvme_tcp 00:26:32.852 rmmod nvme_fabrics 00:26:32.852 rmmod nvme_keyring 00:26:32.852 01:26:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:32.852 01:26:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:26:32.852 01:26:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:26:32.852 01:26:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 1017473 ']' 00:26:32.852 01:26:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 1017473 00:26:32.852 01:26:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 1017473 ']' 00:26:32.852 01:26:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # 
kill -0 1017473 00:26:32.852 01:26:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:26:32.852 01:26:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:32.852 01:26:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1017473 00:26:33.112 01:26:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:33.112 01:26:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:33.112 01:26:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1017473' 00:26:33.112 killing process with pid 1017473 00:26:33.112 01:26:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 1017473 00:26:33.112 01:26:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 1017473 00:26:33.112 01:26:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:33.112 01:26:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:33.112 01:26:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:33.112 01:26:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:33.112 01:26:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:33.112 01:26:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:33.112 01:26:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:33.112 01:26:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:35.653 01:26:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:26:35.653 00:26:35.653 real 0m22.220s 00:26:35.653 user 0m28.980s 00:26:35.653 sys 0m5.318s 00:26:35.653 01:26:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:35.653 01:26:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:35.653 ************************************ 00:26:35.653 END TEST nvmf_discovery_remove_ifc 00:26:35.653 ************************************ 00:26:35.653 01:26:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:35.653 01:26:57 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:35.653 01:26:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:35.653 01:26:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:35.653 01:26:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:35.653 ************************************ 00:26:35.653 START TEST nvmf_identify_kernel_target 00:26:35.653 ************************************ 00:26:35.653 01:26:57 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:35.653 * Looking for test storage... 
00:26:35.653 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:35.653 01:26:57 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:35.653 01:26:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:26:35.653 01:26:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:35.653 01:26:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:35.653 01:26:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:35.653 01:26:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:35.653 01:26:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:35.653 01:26:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:35.653 01:26:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:35.653 01:26:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:35.653 01:26:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:35.653 01:26:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:35.653 01:26:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:35.653 01:26:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:35.653 01:26:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:35.653 01:26:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme 
connect' 00:26:35.653 01:26:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:35.653 01:26:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:35.653 01:26:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:35.653 01:26:57 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:35.653 01:26:57 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:35.653 01:26:57 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:35.653 01:26:57 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:35.653 01:26:57 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:35.653 01:26:57 
nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:35.653 01:26:57 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:26:35.653 01:26:57 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:35.653 01:26:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:26:35.653 01:26:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:35.653 01:26:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:35.653 01:26:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:35.653 01:26:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:35.653 01:26:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:35.653 01:26:57 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:35.653 01:26:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:35.653 01:26:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:35.653 01:26:57 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:26:35.653 01:26:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:35.653 01:26:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:35.653 01:26:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:35.653 01:26:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:35.653 01:26:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:35.653 01:26:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:35.653 01:26:57 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:35.653 01:26:57 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:35.653 01:26:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:35.653 01:26:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:35.653 01:26:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:26:35.653 01:26:57 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:26:40.938 01:27:02 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:40.938 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:40.938 01:27:02 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:40.938 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:40.938 Found net devices under 0000:86:00.0: cvl_0_0 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:40.938 Found net devices under 0000:86:00.1: cvl_0_1 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link 
set cvl_0_1 up 00:26:40.938 01:27:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:40.938 01:27:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:40.938 01:27:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:40.938 01:27:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:40.938 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:40.938 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms 00:26:40.938 00:26:40.938 --- 10.0.0.2 ping statistics --- 00:26:40.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:40.938 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:26:40.938 01:27:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:40.938 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:40.938 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:26:40.938 00:26:40.938 --- 10.0.0.1 ping statistics --- 00:26:40.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:40.938 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:26:40.938 01:27:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:40.938 01:27:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:26:40.938 01:27:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:40.938 01:27:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:40.938 01:27:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:40.939 01:27:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:40.939 01:27:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:40.939 01:27:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:40.939 01:27:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:40.939 01:27:03 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:26:40.939 01:27:03 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:26:40.939 01:27:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:26:40.939 01:27:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:40.939 01:27:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:40.939 01:27:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:40.939 01:27:03 
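The `nvmf_tcp_init` commands traced above set up a two-interface test topology: one NIC is moved into a network namespace to host the target at 10.0.0.2, the other stays in the default namespace as the initiator at 10.0.0.1, and connectivity is verified with ping in both directions. A minimal standalone sketch of that sequence follows; it requires root, and the interface names `cvl_0_0`/`cvl_0_1` are specific to this rig (substitute your own NICs):

```shell
#!/usr/bin/env bash
# Sketch of the nvmf_tcp_init topology from the trace above (requires root).
set -euo pipefail

TARGET_IF=cvl_0_0        # moved into the namespace; hosts the target IP
INITIATOR_IF=cvl_0_1     # stays in the default namespace
NS=cvl_0_0_ns_spdk

# Start from clean interfaces, then isolate the target NIC.
ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

# Address both ends of the 10.0.0.0/24 test network.
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Admit NVMe/TCP traffic on the default port.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

# Verify reachability in both directions, as the test does.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```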
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:40.939 01:27:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:40.939 01:27:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:40.939 01:27:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:40.939 01:27:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:40.939 01:27:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:40.939 01:27:03 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:26:40.939 01:27:03 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:40.939 01:27:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:40.939 01:27:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:26:40.939 01:27:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:40.939 01:27:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:40.939 01:27:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:40.939 01:27:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:26:40.939 01:27:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:26:40.939 01:27:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:26:40.939 01:27:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:40.939 01:27:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:43.478 Waiting for block devices as requested 00:26:43.478 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:26:43.478 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:43.478 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:43.478 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:43.478 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:43.478 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:43.737 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:43.737 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:43.737 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:43.737 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:43.997 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:43.997 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:43.997 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:44.255 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:44.255 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:44.255 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:44.255 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:44.515 01:27:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:26:44.515 01:27:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:44.515 01:27:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:26:44.515 01:27:06 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:26:44.515 01:27:06 nvmf_tcp.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:44.515 01:27:06 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:26:44.515 01:27:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:26:44.515 01:27:06 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:26:44.515 01:27:06 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:44.515 No valid GPT data, bailing 00:26:44.515 01:27:06 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:44.515 01:27:06 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:26:44.515 01:27:06 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:26:44.515 01:27:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:26:44.515 01:27:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:26:44.515 01:27:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:44.515 01:27:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:44.515 01:27:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:44.515 01:27:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:44.515 01:27:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:26:44.515 01:27:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:26:44.515 01:27:06 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@669 -- # echo 1 00:26:44.515 01:27:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:26:44.515 01:27:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:26:44.515 01:27:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:26:44.515 01:27:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:26:44.515 01:27:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:44.515 01:27:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:26:44.515 00:26:44.515 Discovery Log Number of Records 2, Generation counter 2 00:26:44.515 =====Discovery Log Entry 0====== 00:26:44.515 trtype: tcp 00:26:44.515 adrfam: ipv4 00:26:44.515 subtype: current discovery subsystem 00:26:44.515 treq: not specified, sq flow control disable supported 00:26:44.515 portid: 1 00:26:44.515 trsvcid: 4420 00:26:44.515 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:44.515 traddr: 10.0.0.1 00:26:44.515 eflags: none 00:26:44.515 sectype: none 00:26:44.515 =====Discovery Log Entry 1====== 00:26:44.515 trtype: tcp 00:26:44.515 adrfam: ipv4 00:26:44.515 subtype: nvme subsystem 00:26:44.515 treq: not specified, sq flow control disable supported 00:26:44.515 portid: 1 00:26:44.515 trsvcid: 4420 00:26:44.515 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:44.515 traddr: 10.0.0.1 00:26:44.515 eflags: none 00:26:44.515 sectype: none 00:26:44.515 01:27:06 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:26:44.515 trsvcid:4420 
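The `configure_kernel_target` writes traced above (nvmf/common.sh@658-677) build a kernel NVMe-oF TCP target entirely through the nvmet configfs tree: create a subsystem and namespace, point the namespace at a backing block device, open a TCP port, and link the subsystem to the port. A sketch of that sequence is below; it requires root plus the `nvmet` and `nvme-tcp` modules, and the NQN, address, and `/dev/nvme0n1` device are taken from this particular run:

```shell
#!/usr/bin/env bash
# Sketch of the kernel target setup from the trace above (requires root).
set -euo pipefail

NQN=nqn.2016-06.io.spdk:testnqn
DEV=/dev/nvme0n1                  # backing block device (from this run)
NVMET=/sys/kernel/config/nvmet

modprobe nvmet
modprobe nvme-tcp

# Subsystem, one namespace, one port.
mkdir "$NVMET/subsystems/$NQN"
mkdir "$NVMET/subsystems/$NQN/namespaces/1"
mkdir "$NVMET/ports/1"

echo "SPDK-$NQN" > "$NVMET/subsystems/$NQN/attr_model"
echo 1           > "$NVMET/subsystems/$NQN/attr_allow_any_host"
echo "$DEV"      > "$NVMET/subsystems/$NQN/namespaces/1/device_path"
echo 1           > "$NVMET/subsystems/$NQN/namespaces/1/enable"

echo 10.0.0.1 > "$NVMET/ports/1/addr_traddr"
echo tcp      > "$NVMET/ports/1/addr_trtype"
echo 4420     > "$NVMET/ports/1/addr_trsvcid"
echo ipv4     > "$NVMET/ports/1/addr_adrfam"

# Expose the subsystem on the port; after this, `nvme discover -t tcp
# -a 10.0.0.1 -s 4420` should list it, as in the trace.
ln -s "$NVMET/subsystems/$NQN" "$NVMET/ports/1/subsystems/"
```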
subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:26:44.515 EAL: No free 2048 kB hugepages reported on node 1 00:26:44.515 ===================================================== 00:26:44.515 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:44.515 ===================================================== 00:26:44.515 Controller Capabilities/Features 00:26:44.515 ================================ 00:26:44.515 Vendor ID: 0000 00:26:44.515 Subsystem Vendor ID: 0000 00:26:44.515 Serial Number: 37eddcdbc15204514c95 00:26:44.515 Model Number: Linux 00:26:44.515 Firmware Version: 6.7.0-68 00:26:44.515 Recommended Arb Burst: 0 00:26:44.515 IEEE OUI Identifier: 00 00 00 00:26:44.515 Multi-path I/O 00:26:44.515 May have multiple subsystem ports: No 00:26:44.515 May have multiple controllers: No 00:26:44.515 Associated with SR-IOV VF: No 00:26:44.515 Max Data Transfer Size: Unlimited 00:26:44.515 Max Number of Namespaces: 0 00:26:44.515 Max Number of I/O Queues: 1024 00:26:44.515 NVMe Specification Version (VS): 1.3 00:26:44.515 NVMe Specification Version (Identify): 1.3 00:26:44.515 Maximum Queue Entries: 1024 00:26:44.515 Contiguous Queues Required: No 00:26:44.515 Arbitration Mechanisms Supported 00:26:44.515 Weighted Round Robin: Not Supported 00:26:44.515 Vendor Specific: Not Supported 00:26:44.515 Reset Timeout: 7500 ms 00:26:44.515 Doorbell Stride: 4 bytes 00:26:44.515 NVM Subsystem Reset: Not Supported 00:26:44.516 Command Sets Supported 00:26:44.516 NVM Command Set: Supported 00:26:44.516 Boot Partition: Not Supported 00:26:44.516 Memory Page Size Minimum: 4096 bytes 00:26:44.516 Memory Page Size Maximum: 4096 bytes 00:26:44.516 Persistent Memory Region: Not Supported 00:26:44.516 Optional Asynchronous Events Supported 00:26:44.516 Namespace Attribute Notices: Not Supported 00:26:44.516 Firmware Activation Notices: Not Supported 00:26:44.516 ANA Change Notices: Not Supported 00:26:44.516 PLE Aggregate Log Change Notices: Not Supported 
00:26:44.516 LBA Status Info Alert Notices: Not Supported 00:26:44.516 EGE Aggregate Log Change Notices: Not Supported 00:26:44.516 Normal NVM Subsystem Shutdown event: Not Supported 00:26:44.516 Zone Descriptor Change Notices: Not Supported 00:26:44.516 Discovery Log Change Notices: Supported 00:26:44.516 Controller Attributes 00:26:44.516 128-bit Host Identifier: Not Supported 00:26:44.516 Non-Operational Permissive Mode: Not Supported 00:26:44.516 NVM Sets: Not Supported 00:26:44.516 Read Recovery Levels: Not Supported 00:26:44.516 Endurance Groups: Not Supported 00:26:44.516 Predictable Latency Mode: Not Supported 00:26:44.516 Traffic Based Keep ALive: Not Supported 00:26:44.516 Namespace Granularity: Not Supported 00:26:44.516 SQ Associations: Not Supported 00:26:44.516 UUID List: Not Supported 00:26:44.516 Multi-Domain Subsystem: Not Supported 00:26:44.516 Fixed Capacity Management: Not Supported 00:26:44.516 Variable Capacity Management: Not Supported 00:26:44.516 Delete Endurance Group: Not Supported 00:26:44.516 Delete NVM Set: Not Supported 00:26:44.516 Extended LBA Formats Supported: Not Supported 00:26:44.516 Flexible Data Placement Supported: Not Supported 00:26:44.516 00:26:44.516 Controller Memory Buffer Support 00:26:44.516 ================================ 00:26:44.516 Supported: No 00:26:44.516 00:26:44.516 Persistent Memory Region Support 00:26:44.516 ================================ 00:26:44.516 Supported: No 00:26:44.516 00:26:44.516 Admin Command Set Attributes 00:26:44.516 ============================ 00:26:44.516 Security Send/Receive: Not Supported 00:26:44.516 Format NVM: Not Supported 00:26:44.516 Firmware Activate/Download: Not Supported 00:26:44.516 Namespace Management: Not Supported 00:26:44.516 Device Self-Test: Not Supported 00:26:44.516 Directives: Not Supported 00:26:44.516 NVMe-MI: Not Supported 00:26:44.516 Virtualization Management: Not Supported 00:26:44.516 Doorbell Buffer Config: Not Supported 00:26:44.516 Get LBA Status 
Capability: Not Supported 00:26:44.516 Command & Feature Lockdown Capability: Not Supported 00:26:44.516 Abort Command Limit: 1 00:26:44.516 Async Event Request Limit: 1 00:26:44.516 Number of Firmware Slots: N/A 00:26:44.516 Firmware Slot 1 Read-Only: N/A 00:26:44.516 Firmware Activation Without Reset: N/A 00:26:44.516 Multiple Update Detection Support: N/A 00:26:44.516 Firmware Update Granularity: No Information Provided 00:26:44.516 Per-Namespace SMART Log: No 00:26:44.516 Asymmetric Namespace Access Log Page: Not Supported 00:26:44.516 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:44.516 Command Effects Log Page: Not Supported 00:26:44.516 Get Log Page Extended Data: Supported 00:26:44.516 Telemetry Log Pages: Not Supported 00:26:44.516 Persistent Event Log Pages: Not Supported 00:26:44.516 Supported Log Pages Log Page: May Support 00:26:44.516 Commands Supported & Effects Log Page: Not Supported 00:26:44.516 Feature Identifiers & Effects Log Page:May Support 00:26:44.516 NVMe-MI Commands & Effects Log Page: May Support 00:26:44.516 Data Area 4 for Telemetry Log: Not Supported 00:26:44.516 Error Log Page Entries Supported: 1 00:26:44.516 Keep Alive: Not Supported 00:26:44.516 00:26:44.516 NVM Command Set Attributes 00:26:44.516 ========================== 00:26:44.516 Submission Queue Entry Size 00:26:44.516 Max: 1 00:26:44.516 Min: 1 00:26:44.516 Completion Queue Entry Size 00:26:44.516 Max: 1 00:26:44.516 Min: 1 00:26:44.516 Number of Namespaces: 0 00:26:44.516 Compare Command: Not Supported 00:26:44.516 Write Uncorrectable Command: Not Supported 00:26:44.516 Dataset Management Command: Not Supported 00:26:44.516 Write Zeroes Command: Not Supported 00:26:44.516 Set Features Save Field: Not Supported 00:26:44.516 Reservations: Not Supported 00:26:44.516 Timestamp: Not Supported 00:26:44.516 Copy: Not Supported 00:26:44.516 Volatile Write Cache: Not Present 00:26:44.516 Atomic Write Unit (Normal): 1 00:26:44.516 Atomic Write Unit (PFail): 1 
00:26:44.516 Atomic Compare & Write Unit: 1 00:26:44.516 Fused Compare & Write: Not Supported 00:26:44.516 Scatter-Gather List 00:26:44.516 SGL Command Set: Supported 00:26:44.516 SGL Keyed: Not Supported 00:26:44.516 SGL Bit Bucket Descriptor: Not Supported 00:26:44.516 SGL Metadata Pointer: Not Supported 00:26:44.516 Oversized SGL: Not Supported 00:26:44.516 SGL Metadata Address: Not Supported 00:26:44.516 SGL Offset: Supported 00:26:44.516 Transport SGL Data Block: Not Supported 00:26:44.516 Replay Protected Memory Block: Not Supported 00:26:44.516 00:26:44.516 Firmware Slot Information 00:26:44.516 ========================= 00:26:44.516 Active slot: 0 00:26:44.516 00:26:44.516 00:26:44.516 Error Log 00:26:44.516 ========= 00:26:44.516 00:26:44.516 Active Namespaces 00:26:44.516 ================= 00:26:44.516 Discovery Log Page 00:26:44.516 ================== 00:26:44.516 Generation Counter: 2 00:26:44.516 Number of Records: 2 00:26:44.516 Record Format: 0 00:26:44.516 00:26:44.516 Discovery Log Entry 0 00:26:44.516 ---------------------- 00:26:44.516 Transport Type: 3 (TCP) 00:26:44.516 Address Family: 1 (IPv4) 00:26:44.516 Subsystem Type: 3 (Current Discovery Subsystem) 00:26:44.516 Entry Flags: 00:26:44.516 Duplicate Returned Information: 0 00:26:44.516 Explicit Persistent Connection Support for Discovery: 0 00:26:44.516 Transport Requirements: 00:26:44.516 Secure Channel: Not Specified 00:26:44.516 Port ID: 1 (0x0001) 00:26:44.516 Controller ID: 65535 (0xffff) 00:26:44.516 Admin Max SQ Size: 32 00:26:44.516 Transport Service Identifier: 4420 00:26:44.516 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:44.516 Transport Address: 10.0.0.1 00:26:44.516 Discovery Log Entry 1 00:26:44.516 ---------------------- 00:26:44.516 Transport Type: 3 (TCP) 00:26:44.516 Address Family: 1 (IPv4) 00:26:44.516 Subsystem Type: 2 (NVM Subsystem) 00:26:44.516 Entry Flags: 00:26:44.516 Duplicate Returned Information: 0 00:26:44.516 Explicit Persistent 
Connection Support for Discovery: 0 00:26:44.516 Transport Requirements: 00:26:44.516 Secure Channel: Not Specified 00:26:44.516 Port ID: 1 (0x0001) 00:26:44.516 Controller ID: 65535 (0xffff) 00:26:44.516 Admin Max SQ Size: 32 00:26:44.516 Transport Service Identifier: 4420 00:26:44.516 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:26:44.516 Transport Address: 10.0.0.1 00:26:44.516 01:27:06 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:44.516 EAL: No free 2048 kB hugepages reported on node 1 00:26:44.516 get_feature(0x01) failed 00:26:44.516 get_feature(0x02) failed 00:26:44.516 get_feature(0x04) failed 00:26:44.516 ===================================================== 00:26:44.516 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:44.516 ===================================================== 00:26:44.516 Controller Capabilities/Features 00:26:44.516 ================================ 00:26:44.516 Vendor ID: 0000 00:26:44.516 Subsystem Vendor ID: 0000 00:26:44.516 Serial Number: c5f0c6141d793c81e6a9 00:26:44.516 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:26:44.516 Firmware Version: 6.7.0-68 00:26:44.516 Recommended Arb Burst: 6 00:26:44.516 IEEE OUI Identifier: 00 00 00 00:26:44.516 Multi-path I/O 00:26:44.516 May have multiple subsystem ports: Yes 00:26:44.516 May have multiple controllers: Yes 00:26:44.516 Associated with SR-IOV VF: No 00:26:44.516 Max Data Transfer Size: Unlimited 00:26:44.516 Max Number of Namespaces: 1024 00:26:44.516 Max Number of I/O Queues: 128 00:26:44.516 NVMe Specification Version (VS): 1.3 00:26:44.516 NVMe Specification Version (Identify): 1.3 00:26:44.517 Maximum Queue Entries: 1024 00:26:44.517 Contiguous Queues Required: No 00:26:44.517 Arbitration Mechanisms Supported 
00:26:44.517 Weighted Round Robin: Not Supported 00:26:44.517 Vendor Specific: Not Supported 00:26:44.517 Reset Timeout: 7500 ms 00:26:44.517 Doorbell Stride: 4 bytes 00:26:44.517 NVM Subsystem Reset: Not Supported 00:26:44.517 Command Sets Supported 00:26:44.517 NVM Command Set: Supported 00:26:44.517 Boot Partition: Not Supported 00:26:44.517 Memory Page Size Minimum: 4096 bytes 00:26:44.517 Memory Page Size Maximum: 4096 bytes 00:26:44.517 Persistent Memory Region: Not Supported 00:26:44.517 Optional Asynchronous Events Supported 00:26:44.517 Namespace Attribute Notices: Supported 00:26:44.517 Firmware Activation Notices: Not Supported 00:26:44.517 ANA Change Notices: Supported 00:26:44.517 PLE Aggregate Log Change Notices: Not Supported 00:26:44.517 LBA Status Info Alert Notices: Not Supported 00:26:44.517 EGE Aggregate Log Change Notices: Not Supported 00:26:44.517 Normal NVM Subsystem Shutdown event: Not Supported 00:26:44.517 Zone Descriptor Change Notices: Not Supported 00:26:44.517 Discovery Log Change Notices: Not Supported 00:26:44.517 Controller Attributes 00:26:44.517 128-bit Host Identifier: Supported 00:26:44.517 Non-Operational Permissive Mode: Not Supported 00:26:44.517 NVM Sets: Not Supported 00:26:44.517 Read Recovery Levels: Not Supported 00:26:44.517 Endurance Groups: Not Supported 00:26:44.517 Predictable Latency Mode: Not Supported 00:26:44.517 Traffic Based Keep ALive: Supported 00:26:44.517 Namespace Granularity: Not Supported 00:26:44.517 SQ Associations: Not Supported 00:26:44.517 UUID List: Not Supported 00:26:44.517 Multi-Domain Subsystem: Not Supported 00:26:44.517 Fixed Capacity Management: Not Supported 00:26:44.517 Variable Capacity Management: Not Supported 00:26:44.517 Delete Endurance Group: Not Supported 00:26:44.517 Delete NVM Set: Not Supported 00:26:44.517 Extended LBA Formats Supported: Not Supported 00:26:44.517 Flexible Data Placement Supported: Not Supported 00:26:44.517 00:26:44.517 Controller Memory Buffer Support 
00:26:44.517 ================================ 00:26:44.517 Supported: No 00:26:44.517 00:26:44.517 Persistent Memory Region Support 00:26:44.517 ================================ 00:26:44.517 Supported: No 00:26:44.517 00:26:44.517 Admin Command Set Attributes 00:26:44.517 ============================ 00:26:44.517 Security Send/Receive: Not Supported 00:26:44.517 Format NVM: Not Supported 00:26:44.517 Firmware Activate/Download: Not Supported 00:26:44.517 Namespace Management: Not Supported 00:26:44.517 Device Self-Test: Not Supported 00:26:44.517 Directives: Not Supported 00:26:44.517 NVMe-MI: Not Supported 00:26:44.517 Virtualization Management: Not Supported 00:26:44.517 Doorbell Buffer Config: Not Supported 00:26:44.517 Get LBA Status Capability: Not Supported 00:26:44.517 Command & Feature Lockdown Capability: Not Supported 00:26:44.517 Abort Command Limit: 4 00:26:44.517 Async Event Request Limit: 4 00:26:44.517 Number of Firmware Slots: N/A 00:26:44.517 Firmware Slot 1 Read-Only: N/A 00:26:44.517 Firmware Activation Without Reset: N/A 00:26:44.517 Multiple Update Detection Support: N/A 00:26:44.517 Firmware Update Granularity: No Information Provided 00:26:44.517 Per-Namespace SMART Log: Yes 00:26:44.517 Asymmetric Namespace Access Log Page: Supported 00:26:44.517 ANA Transition Time : 10 sec 00:26:44.517 00:26:44.517 Asymmetric Namespace Access Capabilities 00:26:44.517 ANA Optimized State : Supported 00:26:44.517 ANA Non-Optimized State : Supported 00:26:44.517 ANA Inaccessible State : Supported 00:26:44.517 ANA Persistent Loss State : Supported 00:26:44.517 ANA Change State : Supported 00:26:44.517 ANAGRPID is not changed : No 00:26:44.517 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:26:44.517 00:26:44.517 ANA Group Identifier Maximum : 128 00:26:44.517 Number of ANA Group Identifiers : 128 00:26:44.517 Max Number of Allowed Namespaces : 1024 00:26:44.517 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:26:44.517 Command Effects Log Page: Supported 
00:26:44.517 Get Log Page Extended Data: Supported 00:26:44.517 Telemetry Log Pages: Not Supported 00:26:44.517 Persistent Event Log Pages: Not Supported 00:26:44.517 Supported Log Pages Log Page: May Support 00:26:44.517 Commands Supported & Effects Log Page: Not Supported 00:26:44.517 Feature Identifiers & Effects Log Page:May Support 00:26:44.517 NVMe-MI Commands & Effects Log Page: May Support 00:26:44.517 Data Area 4 for Telemetry Log: Not Supported 00:26:44.517 Error Log Page Entries Supported: 128 00:26:44.517 Keep Alive: Supported 00:26:44.517 Keep Alive Granularity: 1000 ms 00:26:44.517 00:26:44.517 NVM Command Set Attributes 00:26:44.517 ========================== 00:26:44.517 Submission Queue Entry Size 00:26:44.517 Max: 64 00:26:44.517 Min: 64 00:26:44.517 Completion Queue Entry Size 00:26:44.517 Max: 16 00:26:44.517 Min: 16 00:26:44.517 Number of Namespaces: 1024 00:26:44.517 Compare Command: Not Supported 00:26:44.517 Write Uncorrectable Command: Not Supported 00:26:44.517 Dataset Management Command: Supported 00:26:44.517 Write Zeroes Command: Supported 00:26:44.517 Set Features Save Field: Not Supported 00:26:44.517 Reservations: Not Supported 00:26:44.517 Timestamp: Not Supported 00:26:44.517 Copy: Not Supported 00:26:44.517 Volatile Write Cache: Present 00:26:44.517 Atomic Write Unit (Normal): 1 00:26:44.517 Atomic Write Unit (PFail): 1 00:26:44.517 Atomic Compare & Write Unit: 1 00:26:44.517 Fused Compare & Write: Not Supported 00:26:44.517 Scatter-Gather List 00:26:44.517 SGL Command Set: Supported 00:26:44.517 SGL Keyed: Not Supported 00:26:44.517 SGL Bit Bucket Descriptor: Not Supported 00:26:44.517 SGL Metadata Pointer: Not Supported 00:26:44.517 Oversized SGL: Not Supported 00:26:44.517 SGL Metadata Address: Not Supported 00:26:44.517 SGL Offset: Supported 00:26:44.517 Transport SGL Data Block: Not Supported 00:26:44.517 Replay Protected Memory Block: Not Supported 00:26:44.517 00:26:44.517 Firmware Slot Information 00:26:44.517 
========================= 00:26:44.517 Active slot: 0 00:26:44.517 00:26:44.517 Asymmetric Namespace Access 00:26:44.517 =========================== 00:26:44.517 Change Count : 0 00:26:44.517 Number of ANA Group Descriptors : 1 00:26:44.517 ANA Group Descriptor : 0 00:26:44.517 ANA Group ID : 1 00:26:44.517 Number of NSID Values : 1 00:26:44.517 Change Count : 0 00:26:44.517 ANA State : 1 00:26:44.517 Namespace Identifier : 1 00:26:44.517 00:26:44.517 Commands Supported and Effects 00:26:44.517 ============================== 00:26:44.517 Admin Commands 00:26:44.517 -------------- 00:26:44.517 Get Log Page (02h): Supported 00:26:44.517 Identify (06h): Supported 00:26:44.517 Abort (08h): Supported 00:26:44.517 Set Features (09h): Supported 00:26:44.517 Get Features (0Ah): Supported 00:26:44.517 Asynchronous Event Request (0Ch): Supported 00:26:44.517 Keep Alive (18h): Supported 00:26:44.517 I/O Commands 00:26:44.517 ------------ 00:26:44.517 Flush (00h): Supported 00:26:44.517 Write (01h): Supported LBA-Change 00:26:44.517 Read (02h): Supported 00:26:44.517 Write Zeroes (08h): Supported LBA-Change 00:26:44.517 Dataset Management (09h): Supported 00:26:44.517 00:26:44.517 Error Log 00:26:44.517 ========= 00:26:44.517 Entry: 0 00:26:44.517 Error Count: 0x3 00:26:44.517 Submission Queue Id: 0x0 00:26:44.517 Command Id: 0x5 00:26:44.517 Phase Bit: 0 00:26:44.517 Status Code: 0x2 00:26:44.517 Status Code Type: 0x0 00:26:44.517 Do Not Retry: 1 00:26:44.517 Error Location: 0x28 00:26:44.517 LBA: 0x0 00:26:44.517 Namespace: 0x0 00:26:44.517 Vendor Log Page: 0x0 00:26:44.517 ----------- 00:26:44.517 Entry: 1 00:26:44.517 Error Count: 0x2 00:26:44.517 Submission Queue Id: 0x0 00:26:44.517 Command Id: 0x5 00:26:44.517 Phase Bit: 0 00:26:44.518 Status Code: 0x2 00:26:44.518 Status Code Type: 0x0 00:26:44.518 Do Not Retry: 1 00:26:44.518 Error Location: 0x28 00:26:44.518 LBA: 0x0 00:26:44.518 Namespace: 0x0 00:26:44.518 Vendor Log Page: 0x0 00:26:44.518 ----------- 00:26:44.518 
Entry: 2 00:26:44.518 Error Count: 0x1 00:26:44.518 Submission Queue Id: 0x0 00:26:44.518 Command Id: 0x4 00:26:44.518 Phase Bit: 0 00:26:44.518 Status Code: 0x2 00:26:44.518 Status Code Type: 0x0 00:26:44.518 Do Not Retry: 1 00:26:44.518 Error Location: 0x28 00:26:44.518 LBA: 0x0 00:26:44.518 Namespace: 0x0 00:26:44.518 Vendor Log Page: 0x0 00:26:44.518 00:26:44.518 Number of Queues 00:26:44.518 ================ 00:26:44.518 Number of I/O Submission Queues: 128 00:26:44.518 Number of I/O Completion Queues: 128 00:26:44.518 00:26:44.518 ZNS Specific Controller Data 00:26:44.518 ============================ 00:26:44.518 Zone Append Size Limit: 0 00:26:44.518 00:26:44.518 00:26:44.518 Active Namespaces 00:26:44.518 ================= 00:26:44.518 get_feature(0x05) failed 00:26:44.518 Namespace ID:1 00:26:44.518 Command Set Identifier: NVM (00h) 00:26:44.518 Deallocate: Supported 00:26:44.518 Deallocated/Unwritten Error: Not Supported 00:26:44.518 Deallocated Read Value: Unknown 00:26:44.518 Deallocate in Write Zeroes: Not Supported 00:26:44.518 Deallocated Guard Field: 0xFFFF 00:26:44.518 Flush: Supported 00:26:44.518 Reservation: Not Supported 00:26:44.518 Namespace Sharing Capabilities: Multiple Controllers 00:26:44.518 Size (in LBAs): 1953525168 (931GiB) 00:26:44.518 Capacity (in LBAs): 1953525168 (931GiB) 00:26:44.518 Utilization (in LBAs): 1953525168 (931GiB) 00:26:44.518 UUID: f6ab51a3-3f4c-4451-a53c-c98aad5245bb 00:26:44.518 Thin Provisioning: Not Supported 00:26:44.518 Per-NS Atomic Units: Yes 00:26:44.518 Atomic Boundary Size (Normal): 0 00:26:44.518 Atomic Boundary Size (PFail): 0 00:26:44.518 Atomic Boundary Offset: 0 00:26:44.518 NGUID/EUI64 Never Reused: No 00:26:44.518 ANA group ID: 1 00:26:44.518 Namespace Write Protected: No 00:26:44.518 Number of LBA Formats: 1 00:26:44.518 Current LBA Format: LBA Format #00 00:26:44.518 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:44.518 00:26:44.518 01:27:06 nvmf_tcp.nvmf_identify_kernel_target -- 
host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:26:44.518 01:27:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:44.518 01:27:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:26:44.518 01:27:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:44.518 01:27:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:26:44.518 01:27:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:44.518 01:27:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:44.518 rmmod nvme_tcp 00:26:44.518 rmmod nvme_fabrics 00:26:44.776 01:27:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:44.776 01:27:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:26:44.776 01:27:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:26:44.776 01:27:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:26:44.776 01:27:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:44.777 01:27:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:44.777 01:27:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:44.777 01:27:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:44.777 01:27:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:44.777 01:27:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:44.777 01:27:07 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:44.777 01:27:07 nvmf_tcp.nvmf_identify_kernel_target -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:46.687 01:27:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:46.687 01:27:09 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:26:46.687 01:27:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:26:46.687 01:27:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:26:46.687 01:27:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:46.687 01:27:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:46.687 01:27:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:46.687 01:27:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:46.687 01:27:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:26:46.687 01:27:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:26:46.687 01:27:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:49.230 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:49.230 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:49.230 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:49.230 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:49.230 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:49.230 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:49.230 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:49.230 0000:00:04.0 (8086 2021): ioatdma -> 
vfio-pci 00:26:49.230 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:49.230 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:49.230 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:49.230 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:49.230 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:49.230 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:49.230 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:49.230 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:50.171 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:26:50.171 00:26:50.171 real 0m14.725s 00:26:50.171 user 0m3.493s 00:26:50.171 sys 0m7.436s 00:26:50.171 01:27:12 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:50.171 01:27:12 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:50.171 ************************************ 00:26:50.171 END TEST nvmf_identify_kernel_target 00:26:50.171 ************************************ 00:26:50.171 01:27:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:50.171 01:27:12 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:50.171 01:27:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:50.171 01:27:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:50.171 01:27:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:50.171 ************************************ 00:26:50.171 START TEST nvmf_auth_host 00:26:50.171 ************************************ 00:26:50.171 01:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:50.171 * Looking for test storage... 
00:26:50.171 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:50.171 01:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:50.171 01:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:26:50.171 01:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:50.171 01:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:50.171 01:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:50.171 01:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:50.171 01:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:50.171 01:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:50.171 01:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:50.172 01:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:50.172 01:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:50.172 01:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:50.172 01:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:50.172 01:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:50.172 01:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:50.172 01:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:50.172 01:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:50.172 01:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:50.172 
01:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:50.172 01:27:12 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:50.172 01:27:12 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:50.172 01:27:12 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:50.172 01:27:12 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.172 01:27:12 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.172 01:27:12 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.172 01:27:12 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:26:50.172 01:27:12 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.172 01:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:26:50.172 01:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:50.172 01:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:50.172 01:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:50.172 01:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:50.172 01:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:50.172 01:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:50.172 01:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:50.172 
01:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:50.172 01:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:26:50.172 01:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:26:50.172 01:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:26:50.172 01:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:26:50.172 01:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:50.172 01:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:50.172 01:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:26:50.172 01:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:26:50.172 01:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:26:50.172 01:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:50.172 01:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:50.172 01:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:50.172 01:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:50.172 01:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:50.172 01:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:50.172 01:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:50.172 01:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:50.172 01:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:50.172 01:27:12 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:50.172 01:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:26:50.172 01:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.519 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:55.519 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:26:55.519 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:55.519 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:55.519 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:55.519 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:55.519 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:55.519 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:26:55.519 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:55.519 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:26:55.519 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:26:55.519 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:26:55.519 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:26:55.519 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:26:55.519 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:26:55.519 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:55.519 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:55.519 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:55.519 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:55.519 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:55.519 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:55.519 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:55.519 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:55.520 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:55.520 01:27:17 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:55.520 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices 
under 0000:86:00.0: cvl_0_0' 00:26:55.520 Found net devices under 0000:86:00.0: cvl_0_0 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:55.520 Found net devices under 0000:86:00.1: cvl_0_1 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:55.520 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:55.520 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:26:55.520 00:26:55.520 --- 10.0.0.2 ping statistics --- 00:26:55.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:55.520 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:55.520 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:55.520 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.436 ms 00:26:55.520 00:26:55.520 --- 10.0.0.1 ping statistics --- 00:26:55.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:55.520 rtt min/avg/max/mdev = 0.436/0.436/0.436/0.000 ms 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.520 01:27:17 
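The nvmf_tcp_init steps traced above (flush, netns creation, moving cvl_0_0 into the namespace, addressing, the iptables accept rule, and the cross-namespace pings) can be sketched as a dry-run script. Interface names and the 10.0.0.0/24 addresses are taken from this log; run() only echoes each command, so the sketch is safe to execute without root:

```shell
#!/bin/sh
# Dry-run sketch of the nvmf_tcp_init sequence seen in the log above.
# run() prints each command instead of executing it; swap the echo for
# "$@" (and run as root) to perform the setup for real.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk   # target namespace name from the log
TGT_IF=cvl_0_0       # interface moved into the namespace (target side)
INI_IF=cvl_0_1       # interface left in the default namespace (initiator side)

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                       # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1   # target -> initiator
```

The two ping checks at the end correspond to the connectivity probes whose output appears in the log before the test returns 0.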
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=1029988 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 1029988 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 1029988 ']' 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:55.520 01:27:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.459 01:27:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:56.459 01:27:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:26:56.459 01:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:56.459 01:27:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:56.459 01:27:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.459 01:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:56.459 01:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:26:56.460 01:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:26:56.460 01:27:18 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:56.460 01:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:56.460 01:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:56.460 01:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:26:56.460 01:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:26:56.460 01:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:56.460 01:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a874a87476803992fde2475afaad6ad9 00:26:56.460 01:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:26:56.460 01:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.cww 00:26:56.460 01:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a874a87476803992fde2475afaad6ad9 0 00:26:56.460 01:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a874a87476803992fde2475afaad6ad9 0 00:26:56.460 01:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:56.460 01:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:56.460 01:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a874a87476803992fde2475afaad6ad9 00:26:56.460 01:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:26:56.460 01:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:56.460 01:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.cww 00:26:56.460 01:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.cww 00:26:56.460 01:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.cww 00:26:56.460 01:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 
64 00:26:56.460 01:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:56.460 01:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:56.460 01:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:56.460 01:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:26:56.460 01:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:26:56.460 01:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:56.460 01:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=de6f0df4b524814424ad0128625de54c2820dcdf4a7dd766f1f7fff248f5927f 00:26:56.460 01:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:26:56.460 01:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.6qd 00:26:56.460 01:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key de6f0df4b524814424ad0128625de54c2820dcdf4a7dd766f1f7fff248f5927f 3 00:26:56.460 01:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 de6f0df4b524814424ad0128625de54c2820dcdf4a7dd766f1f7fff248f5927f 3 00:26:56.460 01:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:56.460 01:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:56.460 01:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=de6f0df4b524814424ad0128625de54c2820dcdf4a7dd766f1f7fff248f5927f 00:26:56.460 01:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:26:56.460 01:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:56.460 01:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.6qd 00:26:56.460 01:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.6qd 00:26:56.460 01:27:18 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.6qd 00:26:56.460 01:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:26:56.460 01:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:56.460 01:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:56.460 01:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:56.460 01:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:26:56.460 01:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:26:56.460 01:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:56.460 01:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b1875d5aabfbef297342d6312baddcdc0d784e65dda27176 00:26:56.460 01:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:26:56.460 01:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.ONQ 00:26:56.460 01:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b1875d5aabfbef297342d6312baddcdc0d784e65dda27176 0 00:26:56.460 01:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b1875d5aabfbef297342d6312baddcdc0d784e65dda27176 0 00:26:56.460 01:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:56.460 01:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:56.460 01:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b1875d5aabfbef297342d6312baddcdc0d784e65dda27176 00:26:56.460 01:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:26:56.460 01:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:56.721 01:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.ONQ 00:26:56.721 01:27:18 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.ONQ 00:26:56.721 01:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.ONQ 00:26:56.721 01:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:26:56.721 01:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:56.721 01:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:56.721 01:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:56.721 01:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:26:56.721 01:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:26:56.721 01:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:56.721 01:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=cc2dd898d348072ca41df9acb75b498e3230aec75296256e 00:26:56.721 01:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:26:56.721 01:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.9hX 00:26:56.721 01:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key cc2dd898d348072ca41df9acb75b498e3230aec75296256e 2 00:26:56.721 01:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 cc2dd898d348072ca41df9acb75b498e3230aec75296256e 2 00:26:56.721 01:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:56.721 01:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:56.721 01:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=cc2dd898d348072ca41df9acb75b498e3230aec75296256e 00:26:56.721 01:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:26:56.721 01:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:56.721 01:27:19 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.9hX 00:26:56.721 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.9hX 00:26:56.721 01:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.9hX 00:26:56.721 01:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:56.721 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:56.721 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:56.721 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:56.721 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:26:56.721 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:26:56.721 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:56.721 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=25ccdc3feabefa44144ef5d59e1e0f53 00:26:56.721 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:26:56.721 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.e2L 00:26:56.721 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 25ccdc3feabefa44144ef5d59e1e0f53 1 00:26:56.721 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 25ccdc3feabefa44144ef5d59e1e0f53 1 00:26:56.721 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:56.721 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:56.721 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=25ccdc3feabefa44144ef5d59e1e0f53 00:26:56.721 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:26:56.721 01:27:19 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:26:56.721 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.e2L 00:26:56.721 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.e2L 00:26:56.721 01:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.e2L 00:26:56.721 01:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:56.721 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:56.721 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:56.721 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:56.721 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:26:56.721 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:26:56.721 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:56.721 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=10795b6e10e6f92a3a08bc5c5b0854ad 00:26:56.721 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:26:56.721 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.2YJ 00:26:56.721 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 10795b6e10e6f92a3a08bc5c5b0854ad 1 00:26:56.721 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 10795b6e10e6f92a3a08bc5c5b0854ad 1 00:26:56.721 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:56.721 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:56.721 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=10795b6e10e6f92a3a08bc5c5b0854ad 00:26:56.721 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:26:56.721 
01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:56.721 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.2YJ 00:26:56.721 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.2YJ 00:26:56.721 01:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.2YJ 00:26:56.721 01:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:26:56.721 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:56.721 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:56.721 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:56.721 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:26:56.721 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:26:56.721 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:56.721 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1cb2c9705f3bae30dc8be635397ba8f350a2f41aa30f67a8 00:26:56.721 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:26:56.721 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.wiV 00:26:56.721 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1cb2c9705f3bae30dc8be635397ba8f350a2f41aa30f67a8 2 00:26:56.721 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1cb2c9705f3bae30dc8be635397ba8f350a2f41aa30f67a8 2 00:26:56.721 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:56.721 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:56.721 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=1cb2c9705f3bae30dc8be635397ba8f350a2f41aa30f67a8 00:26:56.721 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:26:56.721 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:56.721 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.wiV 00:26:56.721 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.wiV 00:26:56.721 01:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.wiV 00:26:56.721 01:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:26:56.721 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:56.721 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:56.722 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:56.722 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:26:56.722 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:26:56.722 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:56.722 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=27078362eabf304587ed59727b68381c 00:26:56.722 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:26:56.722 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.zJb 00:26:56.722 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 27078362eabf304587ed59727b68381c 0 00:26:56.722 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 27078362eabf304587ed59727b68381c 0 00:26:56.722 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:56.722 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:56.722 01:27:19 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=27078362eabf304587ed59727b68381c 00:26:56.722 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:26:56.722 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:56.982 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.zJb 00:26:56.982 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.zJb 00:26:56.982 01:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.zJb 00:26:56.982 01:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:26:56.982 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:56.982 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:56.982 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:56.982 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:26:56.982 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:26:56.982 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:56.982 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=732942081eb76d7f609b8b039367c792dcb943441556cad60fd643ed1db4faa9 00:26:56.982 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:26:56.982 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.zLH 00:26:56.982 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 732942081eb76d7f609b8b039367c792dcb943441556cad60fd643ed1db4faa9 3 00:26:56.982 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 732942081eb76d7f609b8b039367c792dcb943441556cad60fd643ed1db4faa9 3 00:26:56.982 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local 
prefix key digest 00:26:56.982 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:56.982 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=732942081eb76d7f609b8b039367c792dcb943441556cad60fd643ed1db4faa9 00:26:56.982 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:26:56.982 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:56.982 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.zLH 00:26:56.982 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.zLH 00:26:56.982 01:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.zLH 00:26:56.982 01:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:26:56.982 01:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1029988 00:26:56.982 01:27:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 1029988 ']' 00:26:56.982 01:27:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:56.982 01:27:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:56.982 01:27:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:56.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
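The gen_dhchap_key/format_key helpers traced above read random hex from /dev/urandom with xxd and pipe it through an inline `python -` step to produce a DHHC-1 secret. A hedged sketch of that wrapping, assuming the layout nvme-cli's gen-dhchap-key also emits (base64 over the raw key bytes followed by their little-endian CRC32; the exact encoding inside the log's python snippet is not visible here):

```python
import base64
import binascii

def format_dhchap_key(hex_key: str, digest: int) -> str:
    """Sketch of the DHHC-1 wrapping performed by format_key in the log.

    hex_key: output of `xxd -p -c0 -l <bytes> /dev/urandom`
    digest:  0=null, 1=sha256, 2=sha384, 3=sha512 (the `digests` map above)
    Assumption: the base64 payload is key bytes plus little-endian CRC32.
    """
    key = bytes.fromhex(hex_key)
    crc = binascii.crc32(key).to_bytes(4, "little")
    return "DHHC-1:%02x:%s:" % (digest, base64.b64encode(key + crc).decode())

# e.g. the null-digest keys[0] value generated in the log:
secret = format_dhchap_key("a874a87476803992fde2475afaad6ad9", 0)
```

The resulting string is what gets written to the `/tmp/spdk.key-*` files (chmod 0600) and later registered via `keyring_file_add_key`.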
00:26:56.982 01:27:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:56.982 01:27:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.982 01:27:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:56.982 01:27:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:26:56.982 01:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:56.982 01:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.cww 00:26:56.982 01:27:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.982 01:27:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.242 01:27:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.242 01:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.6qd ]] 00:26:57.242 01:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.6qd 00:26:57.242 01:27:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.242 01:27:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.242 01:27:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.242 01:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:57.242 01:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.ONQ 00:26:57.242 01:27:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.242 01:27:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.242 01:27:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.242 01:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n 
/tmp/spdk.key-sha384.9hX ]] 00:26:57.242 01:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.9hX 00:26:57.242 01:27:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.242 01:27:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.242 01:27:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.242 01:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:57.242 01:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.e2L 00:26:57.242 01:27:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.242 01:27:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.242 01:27:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.242 01:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.2YJ ]] 00:26:57.242 01:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.2YJ 00:26:57.242 01:27:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.242 01:27:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.242 01:27:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.242 01:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:57.242 01:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.wiV 00:26:57.242 01:27:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.242 01:27:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.242 01:27:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.242 
01:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.zJb ]] 00:26:57.242 01:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.zJb 00:26:57.242 01:27:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.242 01:27:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.242 01:27:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.242 01:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:57.242 01:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.zLH 00:26:57.242 01:27:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.242 01:27:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.242 01:27:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.242 01:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:26:57.242 01:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:26:57.242 01:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:26:57.242 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:57.242 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:57.242 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:57.242 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.242 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.242 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:57.242 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.242 01:27:19 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:26:57.242 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:26:57.242 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:26:57.242 01:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1
00:26:57.242 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1
00:26:57.242 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet
00:26:57.242 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:26:57.242 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:26:57.242 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1
00:26:57.242 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme
00:26:57.242 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]]
00:26:57.242 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet
00:26:57.242 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]]
00:26:57.242 01:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:26:59.783 Waiting for block devices as requested
00:26:59.783 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme
00:26:59.783 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:27:00.042 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:27:00.042 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:27:00.042 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:27:00.301 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:27:00.301 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:27:00.301 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:27:00.301 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:27:00.559 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:27:00.559 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:27:00.559 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:27:00.559 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:27:00.818 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:27:00.819 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:27:00.819 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:27:01.077 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:27:01.644 01:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme*
00:27:01.644 01:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]]
00:27:01.644 01:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1
00:27:01.644 01:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:27:01.644 01:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:27:01.644 01:27:23 nvmf_tcp.nvmf_auth_host --
common/autotest_common.sh@1665 -- # [[ none != none ]]
00:27:01.644 01:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1
00:27:01.644 01:27:23 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt
00:27:01.644 01:27:23 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
No valid GPT data, bailing
00:27:01.644 01:27:23 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:27:01.644 01:27:23 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt=
00:27:01.644 01:27:23 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1
00:27:01.644 01:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1
00:27:01.644 01:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]]
00:27:01.644 01:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:27:01.644 01:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:27:01.644 01:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:27:01.644 01:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0
00:27:01.644 01:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1
00:27:01.644 01:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1
00:27:01.644 01:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1
00:27:01.644 01:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1
00:27:01.644 01:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp
00:27:01.644 01:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420
00:27:01.644 01:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4
00:27:01.644 01:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/
00:27:01.644 01:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420
00:27:01.644
00:27:01.644 Discovery Log Number of Records 2, Generation counter 2
00:27:01.644 =====Discovery Log Entry 0======
00:27:01.644 trtype: tcp
00:27:01.644 adrfam: ipv4
00:27:01.644 subtype: current discovery subsystem
00:27:01.644 treq: not specified, sq flow control disable supported
00:27:01.644 portid: 1
00:27:01.644 trsvcid: 4420
00:27:01.645 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:27:01.645 traddr: 10.0.0.1
00:27:01.645 eflags: none
00:27:01.645 sectype: none
00:27:01.645 =====Discovery Log Entry 1======
00:27:01.645 trtype: tcp
00:27:01.645 adrfam: ipv4
00:27:01.645 subtype: nvme subsystem
00:27:01.645 treq: not specified, sq flow control disable supported
00:27:01.645 portid: 1
00:27:01.645 trsvcid: 4420
00:27:01.645 subnqn: nqn.2024-02.io.spdk:cnode0
00:27:01.645 traddr: 10.0.0.1
00:27:01.645 eflags: none
00:27:01.645 sectype: none
00:27:01.645 01:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:27:01.645 01:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0
00:27:01.645 01:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
00:27:01.645 01:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:27:01.645 01:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:01.645 01:27:23
nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:01.645 01:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:01.645 01:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:27:01.645 01:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjE4NzVkNWFhYmZiZWYyOTczNDJkNjMxMmJhZGRjZGMwZDc4NGU2NWRkYTI3MTc2/13YOg==:
00:27:01.645 01:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2MyZGQ4OThkMzQ4MDcyY2E0MWRmOWFjYjc1YjQ5OGUzMjMwYWVjNzUyOTYyNTZl6RCJMA==:
00:27:01.645 01:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:01.645 01:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:01.645 01:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjE4NzVkNWFhYmZiZWYyOTczNDJkNjMxMmJhZGRjZGMwZDc4NGU2NWRkYTI3MTc2/13YOg==:
00:27:01.645 01:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2MyZGQ4OThkMzQ4MDcyY2E0MWRmOWFjYjc1YjQ5OGUzMjMwYWVjNzUyOTYyNTZl6RCJMA==: ]]
00:27:01.645 01:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2MyZGQ4OThkMzQ4MDcyY2E0MWRmOWFjYjc1YjQ5OGUzMjMwYWVjNzUyOTYyNTZl6RCJMA==:
00:27:01.645 01:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=,
00:27:01.645 01:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512
00:27:01.645 01:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=,
00:27:01.645 01:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:27:01.645 01:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1
00:27:01.645 01:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:01.645 01:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512
00:27:01.645 01:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:27:01.645 01:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:27:01.645 01:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:01.645 01:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:27:01.645 01:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:01.645 01:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:01.645 01:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:01.645 01:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:01.645 01:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:27:01.645 01:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:27:01.645 01:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:27:01.645 01:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:01.645 01:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:01.645 01:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:27:01.645 01:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:01.645 01:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:27:01.645 01:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:27:01.645 01:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:27:01.645 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n
nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:27:01.645 01:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:01.645 01:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:01.645 nvme0n1
00:27:01.645 01:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:01.645 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:01.645 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:01.645 01:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:01.645 01:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:01.904 01:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:01.904 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:01.904 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:01.904 01:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:01.904 01:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:01.904 01:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:01.904 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:27:01.904 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:27:01.904 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:01.904 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0
00:27:01.904 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:01.904 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:01.904 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:01.905 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:27:01.905 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTg3NGE4NzQ3NjgwMzk5MmZkZTI0NzVhZmFhZDZhZDnvaRoE:
00:27:01.905 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGU2ZjBkZjRiNTI0ODE0NDI0YWQwMTI4NjI1ZGU1NGMyODIwZGNkZjRhN2RkNzY2ZjFmN2ZmZjI0OGY1OTI3ZsNxQrE=:
00:27:01.905 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:01.905 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:01.905 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTg3NGE4NzQ3NjgwMzk5MmZkZTI0NzVhZmFhZDZhZDnvaRoE:
00:27:01.905 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGU2ZjBkZjRiNTI0ODE0NDI0YWQwMTI4NjI1ZGU1NGMyODIwZGNkZjRhN2RkNzY2ZjFmN2ZmZjI0OGY1OTI3ZsNxQrE=: ]]
00:27:01.905 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGU2ZjBkZjRiNTI0ODE0NDI0YWQwMTI4NjI1ZGU1NGMyODIwZGNkZjRhN2RkNzY2ZjFmN2ZmZjI0OGY1OTI3ZsNxQrE=:
00:27:01.905 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0
00:27:01.905 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:01.905 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:01.905 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:27:01.905 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:27:01.905 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:01.905 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:27:01.905 01:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:01.905 01:27:24 nvmf_tcp.nvmf_auth_host --
common/autotest_common.sh@10 -- # set +x
00:27:01.905 01:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:01.905 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:01.905 01:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:27:01.905 01:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:27:01.905 01:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:27:01.905 01:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:01.905 01:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:01.905 01:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:27:01.905 01:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:01.905 01:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:27:01.905 01:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:27:01.905 01:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:27:01.905 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:27:01.905 01:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:01.905 01:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:01.905 nvme0n1
00:27:01.905 01:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:01.905 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:01.905 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:01.905 01:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:01.905 01:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:01.905 01:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:01.905 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:01.905 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:01.905 01:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:02.165 01:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:02.165 01:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:02.165 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:02.165 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:27:02.165 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:02.165 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:02.165 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:02.165 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:27:02.165 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjE4NzVkNWFhYmZiZWYyOTczNDJkNjMxMmJhZGRjZGMwZDc4NGU2NWRkYTI3MTc2/13YOg==:
00:27:02.165 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2MyZGQ4OThkMzQ4MDcyY2E0MWRmOWFjYjc1YjQ5OGUzMjMwYWVjNzUyOTYyNTZl6RCJMA==:
00:27:02.165 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:02.165 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:02.165 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjE4NzVkNWFhYmZiZWYyOTczNDJkNjMxMmJhZGRjZGMwZDc4NGU2NWRkYTI3MTc2/13YOg==:
00:27:02.165 01:27:24 nvmf_tcp.nvmf_auth_host --
host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2MyZGQ4OThkMzQ4MDcyY2E0MWRmOWFjYjc1YjQ5OGUzMjMwYWVjNzUyOTYyNTZl6RCJMA==: ]]
00:27:02.165 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2MyZGQ4OThkMzQ4MDcyY2E0MWRmOWFjYjc1YjQ5OGUzMjMwYWVjNzUyOTYyNTZl6RCJMA==:
00:27:02.165 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1
00:27:02.165 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:02.165 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:02.165 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:27:02.165 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:27:02.165 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:02.165 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:27:02.165 01:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:02.165 01:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:02.165 01:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:02.165 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:02.165 01:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:27:02.165 01:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:27:02.165 01:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:27:02.165 01:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:02.165 01:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:02.165 01:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:27:02.165 01:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:02.165 01:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:27:02.165 01:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:27:02.165 01:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:27:02.165 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:27:02.165 01:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:02.165 01:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:02.165 nvme0n1
00:27:02.165 01:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:02.165 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:02.165 01:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:02.165 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:02.165 01:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:02.165 01:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:02.165 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:02.165 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:02.165 01:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:02.165 01:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:02.165 01:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:02.165 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:02.165 01:27:24
nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2
00:27:02.165 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:02.165 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:02.165 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:02.165 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:27:02.165 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjVjY2RjM2ZlYWJlZmE0NDE0NGVmNWQ1OWUxZTBmNTOgpRPI:
00:27:02.166 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTA3OTViNmUxMGU2ZjkyYTNhMDhiYzVjNWIwODU0YWRAglV+:
00:27:02.166 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:02.166 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:02.166 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjVjY2RjM2ZlYWJlZmE0NDE0NGVmNWQ1OWUxZTBmNTOgpRPI:
00:27:02.166 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTA3OTViNmUxMGU2ZjkyYTNhMDhiYzVjNWIwODU0YWRAglV+: ]]
00:27:02.166 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTA3OTViNmUxMGU2ZjkyYTNhMDhiYzVjNWIwODU0YWRAglV+:
00:27:02.166 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2
00:27:02.166 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:02.166 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:02.166 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:27:02.166 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:27:02.166 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:02.166 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:27:02.166 01:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:02.166 01:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:02.166 01:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:02.166 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:02.166 01:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:27:02.166 01:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:27:02.166 01:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:27:02.166 01:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:02.166 01:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:02.166 01:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:27:02.166 01:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:02.166 01:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:27:02.166 01:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:27:02.166 01:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:27:02.166 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:27:02.166 01:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:02.166 01:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:02.426 nvme0n1
00:27:02.426 01:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:02.426 01:27:24 nvmf_tcp.nvmf_auth_host --
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:02.426 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:02.426 01:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:02.426 01:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:02.426 01:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:02.426 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:02.426 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:02.426 01:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:02.426 01:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:02.426 01:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:02.426 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:02.426 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3
00:27:02.426 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:02.426 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:02.426 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:02.426 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:27:02.426 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWNiMmM5NzA1ZjNiYWUzMGRjOGJlNjM1Mzk3YmE4ZjM1MGEyZjQxYWEzMGY2N2E4cni5Qg==:
00:27:02.426 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjcwNzgzNjJlYWJmMzA0NTg3ZWQ1OTcyN2I2ODM4MWNnMXAr:
00:27:02.426 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:02.426 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:02.426 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWNiMmM5NzA1ZjNiYWUzMGRjOGJlNjM1Mzk3YmE4ZjM1MGEyZjQxYWEzMGY2N2E4cni5Qg==:
00:27:02.426 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjcwNzgzNjJlYWJmMzA0NTg3ZWQ1OTcyN2I2ODM4MWNnMXAr: ]]
00:27:02.426 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjcwNzgzNjJlYWJmMzA0NTg3ZWQ1OTcyN2I2ODM4MWNnMXAr:
00:27:02.426 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3
00:27:02.426 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:02.426 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:02.426 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:27:02.426 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:27:02.426 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:02.426 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:27:02.426 01:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:02.426 01:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:02.426 01:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:02.426 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:02.426 01:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:27:02.426 01:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:27:02.426 01:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:27:02.426 01:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:02.426 01:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:02.426 01:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:27:02.426 01:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:02.426 01:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:27:02.426 01:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:27:02.426 01:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:27:02.426 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:27:02.426 01:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:02.426 01:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:02.685 nvme0n1
00:27:02.685 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:02.685 01:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:02.685 01:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:02.685 01:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:02.685 01:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:02.685 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:02.685 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:02.685 01:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:02.685 01:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:02.685 01:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:02.685 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:02.685 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4
00:27:02.685 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:02.685 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:02.685 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:02.685 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:27:02.685 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzMyOTQyMDgxZWI3NmQ3ZjYwOWI4YjAzOTM2N2M3OTJkY2I5NDM0NDE1NTZjYWQ2MGZkNjQzZWQxZGI0ZmFhOYVRKJM=:
00:27:02.685 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:27:02.685 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:02.685 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:02.685 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzMyOTQyMDgxZWI3NmQ3ZjYwOWI4YjAzOTM2N2M3OTJkY2I5NDM0NDE1NTZjYWQ2MGZkNjQzZWQxZGI0ZmFhOYVRKJM=:
00:27:02.685 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:27:02.685 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4
00:27:02.686 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:02.686 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:02.686 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:27:02.686 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:27:02.686 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:02.686 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:27:02.686 01:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.686 01:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.686 01:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.686 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:02.686 01:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:02.686 01:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:02.686 01:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:02.686 01:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:02.686 01:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:02.686 01:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:02.686 01:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:02.686 01:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:02.686 01:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:02.686 01:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:02.686 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:02.686 01:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.686 01:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.945 nvme0n1 00:27:02.945 01:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.945 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:02.945 01:27:25 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:02.945 01:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.945 01:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.945 01:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.945 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:02.945 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:02.945 01:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.945 01:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.945 01:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.945 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:02.945 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:02.945 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:27:02.945 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:02.945 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:02.945 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:02.945 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:02.945 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTg3NGE4NzQ3NjgwMzk5MmZkZTI0NzVhZmFhZDZhZDnvaRoE: 00:27:02.945 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGU2ZjBkZjRiNTI0ODE0NDI0YWQwMTI4NjI1ZGU1NGMyODIwZGNkZjRhN2RkNzY2ZjFmN2ZmZjI0OGY1OTI3ZsNxQrE=: 00:27:02.945 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:02.945 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:27:02.945 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTg3NGE4NzQ3NjgwMzk5MmZkZTI0NzVhZmFhZDZhZDnvaRoE: 00:27:02.945 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGU2ZjBkZjRiNTI0ODE0NDI0YWQwMTI4NjI1ZGU1NGMyODIwZGNkZjRhN2RkNzY2ZjFmN2ZmZjI0OGY1OTI3ZsNxQrE=: ]] 00:27:02.945 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGU2ZjBkZjRiNTI0ODE0NDI0YWQwMTI4NjI1ZGU1NGMyODIwZGNkZjRhN2RkNzY2ZjFmN2ZmZjI0OGY1OTI3ZsNxQrE=: 00:27:02.945 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:27:02.945 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:02.945 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:02.945 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:02.945 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:02.945 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:02.945 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:02.945 01:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.945 01:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.945 01:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.945 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:02.945 01:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:02.945 01:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:02.945 01:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:02.945 01:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 
00:27:02.945 01:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:02.945 01:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:02.945 01:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:02.945 01:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:02.945 01:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:02.945 01:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:02.945 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:02.945 01:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.945 01:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.945 nvme0n1 00:27:02.945 01:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.945 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:02.945 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:02.945 01:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.945 01:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.945 01:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.205 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:03.205 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:03.205 01:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.205 01:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:27:03.205 01:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.205 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:03.205 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:27:03.205 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:03.205 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:03.205 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:03.205 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:03.205 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjE4NzVkNWFhYmZiZWYyOTczNDJkNjMxMmJhZGRjZGMwZDc4NGU2NWRkYTI3MTc2/13YOg==: 00:27:03.205 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2MyZGQ4OThkMzQ4MDcyY2E0MWRmOWFjYjc1YjQ5OGUzMjMwYWVjNzUyOTYyNTZl6RCJMA==: 00:27:03.205 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:03.205 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:03.205 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjE4NzVkNWFhYmZiZWYyOTczNDJkNjMxMmJhZGRjZGMwZDc4NGU2NWRkYTI3MTc2/13YOg==: 00:27:03.205 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2MyZGQ4OThkMzQ4MDcyY2E0MWRmOWFjYjc1YjQ5OGUzMjMwYWVjNzUyOTYyNTZl6RCJMA==: ]] 00:27:03.205 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2MyZGQ4OThkMzQ4MDcyY2E0MWRmOWFjYjc1YjQ5OGUzMjMwYWVjNzUyOTYyNTZl6RCJMA==: 00:27:03.205 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:27:03.205 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:03.205 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:03.205 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 
-- # dhgroup=ffdhe3072 00:27:03.205 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:03.205 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:03.205 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:03.205 01:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.205 01:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.205 01:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.205 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:03.205 01:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:03.205 01:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:03.205 01:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:03.205 01:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:03.205 01:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:03.205 01:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:03.205 01:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:03.205 01:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:03.205 01:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:03.205 01:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:03.205 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:03.205 01:27:25 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.205 01:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.205 nvme0n1 00:27:03.205 01:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.205 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:03.205 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:03.205 01:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.205 01:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.205 01:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.205 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:03.205 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:03.205 01:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.205 01:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.465 01:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.465 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:03.465 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:27:03.465 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:03.465 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:03.465 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:03.465 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:03.465 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjVjY2RjM2ZlYWJlZmE0NDE0NGVmNWQ1OWUxZTBmNTOgpRPI: 00:27:03.465 01:27:25 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@46 -- # ckey=DHHC-1:01:MTA3OTViNmUxMGU2ZjkyYTNhMDhiYzVjNWIwODU0YWRAglV+: 00:27:03.466 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:03.466 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:03.466 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjVjY2RjM2ZlYWJlZmE0NDE0NGVmNWQ1OWUxZTBmNTOgpRPI: 00:27:03.466 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTA3OTViNmUxMGU2ZjkyYTNhMDhiYzVjNWIwODU0YWRAglV+: ]] 00:27:03.466 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTA3OTViNmUxMGU2ZjkyYTNhMDhiYzVjNWIwODU0YWRAglV+: 00:27:03.466 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:27:03.466 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:03.466 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:03.466 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:03.466 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:03.466 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:03.466 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:03.466 01:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.466 01:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.466 01:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.466 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:03.466 01:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:03.466 01:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:03.466 01:27:25 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:03.466 01:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:03.466 01:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:03.466 01:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:03.466 01:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:03.466 01:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:03.466 01:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:03.466 01:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:03.466 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:03.466 01:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.466 01:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.466 nvme0n1 00:27:03.466 01:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.466 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:03.466 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:03.466 01:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.466 01:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.466 01:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.466 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:03.466 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller 
nvme0 00:27:03.466 01:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.466 01:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.466 01:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.466 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:03.466 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:27:03.466 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:03.466 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:03.466 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:03.466 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:03.466 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWNiMmM5NzA1ZjNiYWUzMGRjOGJlNjM1Mzk3YmE4ZjM1MGEyZjQxYWEzMGY2N2E4cni5Qg==: 00:27:03.466 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjcwNzgzNjJlYWJmMzA0NTg3ZWQ1OTcyN2I2ODM4MWNnMXAr: 00:27:03.466 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:03.466 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:03.466 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWNiMmM5NzA1ZjNiYWUzMGRjOGJlNjM1Mzk3YmE4ZjM1MGEyZjQxYWEzMGY2N2E4cni5Qg==: 00:27:03.466 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjcwNzgzNjJlYWJmMzA0NTg3ZWQ1OTcyN2I2ODM4MWNnMXAr: ]] 00:27:03.466 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjcwNzgzNjJlYWJmMzA0NTg3ZWQ1OTcyN2I2ODM4MWNnMXAr: 00:27:03.466 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:27:03.466 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:03.466 01:27:25 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:03.466 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:03.466 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:03.466 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:03.466 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:03.466 01:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.466 01:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.466 01:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.466 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:03.466 01:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:03.466 01:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:03.466 01:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:03.466 01:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:03.466 01:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:03.466 01:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:03.466 01:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:03.466 01:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:03.466 01:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:03.466 01:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:03.466 01:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:03.466 01:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.466 01:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.726 nvme0n1 00:27:03.726 01:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.726 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:03.726 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:03.726 01:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.726 01:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.726 01:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.726 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:03.726 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:03.726 01:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.726 01:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.727 01:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.727 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:03.727 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:27:03.727 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:03.727 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:03.727 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:03.727 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:03.727 01:27:26 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@45 -- # key=DHHC-1:03:NzMyOTQyMDgxZWI3NmQ3ZjYwOWI4YjAzOTM2N2M3OTJkY2I5NDM0NDE1NTZjYWQ2MGZkNjQzZWQxZGI0ZmFhOYVRKJM=: 00:27:03.727 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:03.727 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:03.727 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:03.727 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzMyOTQyMDgxZWI3NmQ3ZjYwOWI4YjAzOTM2N2M3OTJkY2I5NDM0NDE1NTZjYWQ2MGZkNjQzZWQxZGI0ZmFhOYVRKJM=: 00:27:03.727 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:03.727 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:27:03.727 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:03.727 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:03.727 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:03.727 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:03.727 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:03.727 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:03.727 01:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.727 01:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.727 01:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.727 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:03.727 01:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:03.727 01:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:03.727 01:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # 
local -A ip_candidates 00:27:03.727 01:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:03.727 01:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:03.727 01:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:03.727 01:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:03.727 01:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:03.727 01:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:03.727 01:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:03.727 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:03.727 01:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.727 01:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.986 nvme0n1 00:27:03.986 01:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.986 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:03.986 01:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.987 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:03.987 01:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.987 01:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.987 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:03.987 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:03.987 01:27:26 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.987 01:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.987 01:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.987 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:03.987 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:03.987 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:27:03.987 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:03.987 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:03.987 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:03.987 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:03.987 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTg3NGE4NzQ3NjgwMzk5MmZkZTI0NzVhZmFhZDZhZDnvaRoE: 00:27:03.987 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGU2ZjBkZjRiNTI0ODE0NDI0YWQwMTI4NjI1ZGU1NGMyODIwZGNkZjRhN2RkNzY2ZjFmN2ZmZjI0OGY1OTI3ZsNxQrE=: 00:27:03.987 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:03.987 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:03.987 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTg3NGE4NzQ3NjgwMzk5MmZkZTI0NzVhZmFhZDZhZDnvaRoE: 00:27:03.987 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGU2ZjBkZjRiNTI0ODE0NDI0YWQwMTI4NjI1ZGU1NGMyODIwZGNkZjRhN2RkNzY2ZjFmN2ZmZjI0OGY1OTI3ZsNxQrE=: ]] 00:27:03.987 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGU2ZjBkZjRiNTI0ODE0NDI0YWQwMTI4NjI1ZGU1NGMyODIwZGNkZjRhN2RkNzY2ZjFmN2ZmZjI0OGY1OTI3ZsNxQrE=: 00:27:03.987 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 
00:27:03.987 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:03.987 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:03.987 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:03.987 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:03.987 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:03.987 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:03.987 01:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.987 01:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.987 01:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.987 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:03.987 01:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:03.987 01:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:03.987 01:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:03.987 01:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:03.987 01:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:03.987 01:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:03.987 01:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:03.987 01:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:03.987 01:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:03.987 01:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:03.987 01:27:26 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:03.987 01:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.987 01:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.247 nvme0n1 00:27:04.247 01:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.247 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:04.247 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:04.247 01:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.247 01:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.247 01:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.247 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:04.247 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:04.247 01:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.247 01:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.247 01:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.247 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:04.247 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:27:04.247 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:04.247 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:04.247 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 
00:27:04.247 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:04.247 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjE4NzVkNWFhYmZiZWYyOTczNDJkNjMxMmJhZGRjZGMwZDc4NGU2NWRkYTI3MTc2/13YOg==: 00:27:04.247 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2MyZGQ4OThkMzQ4MDcyY2E0MWRmOWFjYjc1YjQ5OGUzMjMwYWVjNzUyOTYyNTZl6RCJMA==: 00:27:04.247 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:04.247 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:04.247 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjE4NzVkNWFhYmZiZWYyOTczNDJkNjMxMmJhZGRjZGMwZDc4NGU2NWRkYTI3MTc2/13YOg==: 00:27:04.247 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2MyZGQ4OThkMzQ4MDcyY2E0MWRmOWFjYjc1YjQ5OGUzMjMwYWVjNzUyOTYyNTZl6RCJMA==: ]] 00:27:04.247 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2MyZGQ4OThkMzQ4MDcyY2E0MWRmOWFjYjc1YjQ5OGUzMjMwYWVjNzUyOTYyNTZl6RCJMA==: 00:27:04.247 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:27:04.247 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:04.247 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:04.247 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:04.247 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:04.247 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:04.247 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:04.247 01:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.247 01:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.247 
01:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.247 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:04.247 01:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:04.247 01:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:04.247 01:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:04.247 01:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:04.247 01:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:04.247 01:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:04.247 01:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:04.247 01:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:04.247 01:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:04.247 01:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:04.247 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:04.247 01:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.247 01:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.506 nvme0n1 00:27:04.506 01:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.506 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:04.506 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:04.507 01:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.507 01:27:26 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.507 01:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.507 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:04.507 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:04.507 01:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.507 01:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.507 01:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.507 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:04.507 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:27:04.507 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:04.507 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:04.507 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:04.507 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:04.507 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjVjY2RjM2ZlYWJlZmE0NDE0NGVmNWQ1OWUxZTBmNTOgpRPI: 00:27:04.507 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTA3OTViNmUxMGU2ZjkyYTNhMDhiYzVjNWIwODU0YWRAglV+: 00:27:04.507 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:04.507 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:04.507 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjVjY2RjM2ZlYWJlZmE0NDE0NGVmNWQ1OWUxZTBmNTOgpRPI: 00:27:04.507 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTA3OTViNmUxMGU2ZjkyYTNhMDhiYzVjNWIwODU0YWRAglV+: ]] 00:27:04.507 01:27:26 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:01:MTA3OTViNmUxMGU2ZjkyYTNhMDhiYzVjNWIwODU0YWRAglV+: 00:27:04.507 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:27:04.507 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:04.507 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:04.507 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:04.507 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:04.507 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:04.507 01:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:04.507 01:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.507 01:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.767 01:27:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.767 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:04.767 01:27:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:04.767 01:27:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:04.767 01:27:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:04.767 01:27:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:04.767 01:27:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:04.767 01:27:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:04.767 01:27:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:04.767 01:27:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:04.767 01:27:27 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:04.767 01:27:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:04.767 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:04.767 01:27:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.767 01:27:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.767 nvme0n1 00:27:04.767 01:27:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.767 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:04.767 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:04.767 01:27:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.767 01:27:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.767 01:27:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.027 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:05.027 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:05.027 01:27:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.027 01:27:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.027 01:27:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.027 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:05.027 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:27:05.027 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 
00:27:05.027 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:05.027 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:05.027 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:05.027 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWNiMmM5NzA1ZjNiYWUzMGRjOGJlNjM1Mzk3YmE4ZjM1MGEyZjQxYWEzMGY2N2E4cni5Qg==: 00:27:05.027 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjcwNzgzNjJlYWJmMzA0NTg3ZWQ1OTcyN2I2ODM4MWNnMXAr: 00:27:05.027 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:05.027 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:05.027 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWNiMmM5NzA1ZjNiYWUzMGRjOGJlNjM1Mzk3YmE4ZjM1MGEyZjQxYWEzMGY2N2E4cni5Qg==: 00:27:05.027 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjcwNzgzNjJlYWJmMzA0NTg3ZWQ1OTcyN2I2ODM4MWNnMXAr: ]] 00:27:05.027 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjcwNzgzNjJlYWJmMzA0NTg3ZWQ1OTcyN2I2ODM4MWNnMXAr: 00:27:05.027 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:27:05.027 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:05.027 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:05.027 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:05.027 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:05.028 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:05.028 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:05.028 01:27:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 
00:27:05.028 01:27:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.028 01:27:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.028 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:05.028 01:27:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:05.028 01:27:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:05.028 01:27:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:05.028 01:27:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:05.028 01:27:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:05.028 01:27:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:05.028 01:27:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:05.028 01:27:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:05.028 01:27:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:05.028 01:27:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:05.028 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:05.028 01:27:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.028 01:27:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.287 nvme0n1 00:27:05.287 01:27:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.287 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:05.287 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:05.287 01:27:27 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.288 01:27:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.288 01:27:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.288 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:05.288 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:05.288 01:27:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.288 01:27:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.288 01:27:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.288 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:05.288 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:27:05.288 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:05.288 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:05.288 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:05.288 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:05.288 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzMyOTQyMDgxZWI3NmQ3ZjYwOWI4YjAzOTM2N2M3OTJkY2I5NDM0NDE1NTZjYWQ2MGZkNjQzZWQxZGI0ZmFhOYVRKJM=: 00:27:05.288 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:05.288 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:05.288 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:05.288 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzMyOTQyMDgxZWI3NmQ3ZjYwOWI4YjAzOTM2N2M3OTJkY2I5NDM0NDE1NTZjYWQ2MGZkNjQzZWQxZGI0ZmFhOYVRKJM=: 00:27:05.288 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # 
[[ -z '' ]] 00:27:05.288 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:27:05.288 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:05.288 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:05.288 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:05.288 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:05.288 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:05.288 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:05.288 01:27:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.288 01:27:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.288 01:27:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.288 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:05.288 01:27:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:05.288 01:27:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:05.288 01:27:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:05.288 01:27:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:05.288 01:27:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:05.288 01:27:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:05.288 01:27:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:05.288 01:27:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:05.288 01:27:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 
00:27:05.288 01:27:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:05.288 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:05.288 01:27:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.288 01:27:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.547 nvme0n1 00:27:05.547 01:27:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.547 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:05.547 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:05.547 01:27:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.547 01:27:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.547 01:27:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.547 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:05.547 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:05.547 01:27:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.547 01:27:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.547 01:27:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.547 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:05.548 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:05.548 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:27:05.548 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid 
key ckey 00:27:05.548 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:05.548 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:05.548 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:05.548 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTg3NGE4NzQ3NjgwMzk5MmZkZTI0NzVhZmFhZDZhZDnvaRoE: 00:27:05.548 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGU2ZjBkZjRiNTI0ODE0NDI0YWQwMTI4NjI1ZGU1NGMyODIwZGNkZjRhN2RkNzY2ZjFmN2ZmZjI0OGY1OTI3ZsNxQrE=: 00:27:05.548 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:05.548 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:05.548 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTg3NGE4NzQ3NjgwMzk5MmZkZTI0NzVhZmFhZDZhZDnvaRoE: 00:27:05.548 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGU2ZjBkZjRiNTI0ODE0NDI0YWQwMTI4NjI1ZGU1NGMyODIwZGNkZjRhN2RkNzY2ZjFmN2ZmZjI0OGY1OTI3ZsNxQrE=: ]] 00:27:05.548 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGU2ZjBkZjRiNTI0ODE0NDI0YWQwMTI4NjI1ZGU1NGMyODIwZGNkZjRhN2RkNzY2ZjFmN2ZmZjI0OGY1OTI3ZsNxQrE=: 00:27:05.548 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:27:05.548 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:05.548 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:05.548 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:05.548 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:05.548 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:05.548 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:05.548 
01:27:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.548 01:27:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.548 01:27:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.548 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:05.548 01:27:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:05.548 01:27:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:05.548 01:27:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:05.548 01:27:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:05.548 01:27:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:05.548 01:27:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:05.548 01:27:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:05.548 01:27:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:05.548 01:27:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:05.548 01:27:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:05.548 01:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:05.548 01:27:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.548 01:27:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.117 nvme0n1 00:27:06.117 01:27:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.117 01:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:06.117 01:27:28 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:06.117 01:27:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.117 01:27:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.117 01:27:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.117 01:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:06.117 01:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:06.117 01:27:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.117 01:27:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.118 01:27:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.118 01:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:06.118 01:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:27:06.118 01:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:06.118 01:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:06.118 01:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:06.118 01:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:06.118 01:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjE4NzVkNWFhYmZiZWYyOTczNDJkNjMxMmJhZGRjZGMwZDc4NGU2NWRkYTI3MTc2/13YOg==: 00:27:06.118 01:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2MyZGQ4OThkMzQ4MDcyY2E0MWRmOWFjYjc1YjQ5OGUzMjMwYWVjNzUyOTYyNTZl6RCJMA==: 00:27:06.118 01:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:06.118 01:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:06.118 01:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YjE4NzVkNWFhYmZiZWYyOTczNDJkNjMxMmJhZGRjZGMwZDc4NGU2NWRkYTI3MTc2/13YOg==: 00:27:06.118 01:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2MyZGQ4OThkMzQ4MDcyY2E0MWRmOWFjYjc1YjQ5OGUzMjMwYWVjNzUyOTYyNTZl6RCJMA==: ]] 00:27:06.118 01:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2MyZGQ4OThkMzQ4MDcyY2E0MWRmOWFjYjc1YjQ5OGUzMjMwYWVjNzUyOTYyNTZl6RCJMA==: 00:27:06.118 01:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:27:06.118 01:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:06.118 01:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:06.118 01:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:06.118 01:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:06.118 01:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:06.118 01:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:06.118 01:27:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.118 01:27:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.118 01:27:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.118 01:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:06.118 01:27:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:06.118 01:27:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:06.118 01:27:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:06.118 01:27:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:06.118 01:27:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:06.118 01:27:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:06.118 01:27:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:06.118 01:27:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:06.118 01:27:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:06.118 01:27:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:06.118 01:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:06.118 01:27:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.118 01:27:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.378 nvme0n1 00:27:06.378 01:27:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.378 01:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:06.378 01:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:06.378 01:27:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.378 01:27:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.378 01:27:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.378 01:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:06.378 01:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:06.378 01:27:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.378 01:27:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.378 01:27:28 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.378 01:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:06.378 01:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:27:06.378 01:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:06.378 01:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:06.378 01:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:06.378 01:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:06.378 01:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjVjY2RjM2ZlYWJlZmE0NDE0NGVmNWQ1OWUxZTBmNTOgpRPI: 00:27:06.378 01:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTA3OTViNmUxMGU2ZjkyYTNhMDhiYzVjNWIwODU0YWRAglV+: 00:27:06.378 01:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:06.378 01:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:06.378 01:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjVjY2RjM2ZlYWJlZmE0NDE0NGVmNWQ1OWUxZTBmNTOgpRPI: 00:27:06.378 01:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTA3OTViNmUxMGU2ZjkyYTNhMDhiYzVjNWIwODU0YWRAglV+: ]] 00:27:06.378 01:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTA3OTViNmUxMGU2ZjkyYTNhMDhiYzVjNWIwODU0YWRAglV+: 00:27:06.378 01:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:27:06.378 01:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:06.378 01:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:06.378 01:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:06.378 01:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:06.378 01:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:06.378 01:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:06.378 01:27:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.378 01:27:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.378 01:27:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.378 01:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:06.378 01:27:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:06.378 01:27:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:06.378 01:27:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:06.378 01:27:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:06.378 01:27:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:06.378 01:27:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:06.378 01:27:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:06.378 01:27:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:06.378 01:27:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:06.378 01:27:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:06.378 01:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:06.378 01:27:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.378 01:27:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.947 nvme0n1 
00:27:06.947 01:27:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.948 01:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:06.948 01:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:06.948 01:27:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.948 01:27:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.948 01:27:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.948 01:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:06.948 01:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:06.948 01:27:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.948 01:27:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.948 01:27:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.948 01:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:06.948 01:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:27:06.948 01:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:06.948 01:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:06.948 01:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:06.948 01:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:06.948 01:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWNiMmM5NzA1ZjNiYWUzMGRjOGJlNjM1Mzk3YmE4ZjM1MGEyZjQxYWEzMGY2N2E4cni5Qg==: 00:27:06.948 01:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjcwNzgzNjJlYWJmMzA0NTg3ZWQ1OTcyN2I2ODM4MWNnMXAr: 00:27:06.948 01:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 
'hmac(sha256)' 00:27:06.948 01:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:06.948 01:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWNiMmM5NzA1ZjNiYWUzMGRjOGJlNjM1Mzk3YmE4ZjM1MGEyZjQxYWEzMGY2N2E4cni5Qg==: 00:27:06.948 01:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjcwNzgzNjJlYWJmMzA0NTg3ZWQ1OTcyN2I2ODM4MWNnMXAr: ]] 00:27:06.948 01:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjcwNzgzNjJlYWJmMzA0NTg3ZWQ1OTcyN2I2ODM4MWNnMXAr: 00:27:06.948 01:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:27:06.948 01:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:06.948 01:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:06.948 01:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:06.948 01:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:06.948 01:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:06.948 01:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:06.948 01:27:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.948 01:27:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.948 01:27:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.948 01:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:06.948 01:27:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:06.948 01:27:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:06.948 01:27:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:06.948 01:27:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:06.948 01:27:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:06.948 01:27:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:06.948 01:27:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:06.948 01:27:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:06.948 01:27:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:06.948 01:27:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:06.948 01:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:06.948 01:27:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.948 01:27:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.208 nvme0n1 00:27:07.208 01:27:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.208 01:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:07.208 01:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:07.208 01:27:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.208 01:27:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.208 01:27:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.208 01:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:07.208 01:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:07.208 01:27:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.208 01:27:29 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.208 01:27:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.208 01:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:07.208 01:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:27:07.208 01:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:07.208 01:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:07.208 01:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:07.208 01:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:07.208 01:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzMyOTQyMDgxZWI3NmQ3ZjYwOWI4YjAzOTM2N2M3OTJkY2I5NDM0NDE1NTZjYWQ2MGZkNjQzZWQxZGI0ZmFhOYVRKJM=: 00:27:07.208 01:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:07.208 01:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:07.208 01:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:07.208 01:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzMyOTQyMDgxZWI3NmQ3ZjYwOWI4YjAzOTM2N2M3OTJkY2I5NDM0NDE1NTZjYWQ2MGZkNjQzZWQxZGI0ZmFhOYVRKJM=: 00:27:07.208 01:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:07.208 01:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:27:07.208 01:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:07.208 01:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:07.208 01:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:07.208 01:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:07.208 01:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:07.208 01:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:07.208 01:27:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.208 01:27:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.208 01:27:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.208 01:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:07.208 01:27:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:07.208 01:27:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:07.208 01:27:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:07.208 01:27:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:07.208 01:27:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:07.208 01:27:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:07.208 01:27:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:07.208 01:27:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:07.208 01:27:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:07.208 01:27:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:07.208 01:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:07.208 01:27:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.208 01:27:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.778 nvme0n1 00:27:07.778 01:27:30 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.778 01:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:07.778 01:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:07.778 01:27:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.778 01:27:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.778 01:27:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.778 01:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:07.778 01:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:07.778 01:27:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.778 01:27:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.778 01:27:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.778 01:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:07.779 01:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:07.779 01:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:27:07.779 01:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:07.779 01:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:07.779 01:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:07.779 01:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:07.779 01:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTg3NGE4NzQ3NjgwMzk5MmZkZTI0NzVhZmFhZDZhZDnvaRoE: 00:27:07.779 01:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGU2ZjBkZjRiNTI0ODE0NDI0YWQwMTI4NjI1ZGU1NGMyODIwZGNkZjRhN2RkNzY2ZjFmN2ZmZjI0OGY1OTI3ZsNxQrE=: 
00:27:07.779 01:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:07.779 01:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:07.779 01:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTg3NGE4NzQ3NjgwMzk5MmZkZTI0NzVhZmFhZDZhZDnvaRoE: 00:27:07.779 01:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGU2ZjBkZjRiNTI0ODE0NDI0YWQwMTI4NjI1ZGU1NGMyODIwZGNkZjRhN2RkNzY2ZjFmN2ZmZjI0OGY1OTI3ZsNxQrE=: ]] 00:27:07.779 01:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGU2ZjBkZjRiNTI0ODE0NDI0YWQwMTI4NjI1ZGU1NGMyODIwZGNkZjRhN2RkNzY2ZjFmN2ZmZjI0OGY1OTI3ZsNxQrE=: 00:27:07.779 01:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:27:07.779 01:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:07.779 01:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:07.779 01:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:07.779 01:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:07.779 01:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:07.779 01:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:07.779 01:27:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.779 01:27:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.779 01:27:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.779 01:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:07.779 01:27:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:07.779 01:27:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:07.779 01:27:30 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:07.779 01:27:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:07.779 01:27:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:07.779 01:27:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:07.779 01:27:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:07.779 01:27:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:07.779 01:27:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:07.779 01:27:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:07.779 01:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:07.779 01:27:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.779 01:27:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.349 nvme0n1 00:27:08.349 01:27:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.349 01:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:08.349 01:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:08.349 01:27:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.349 01:27:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.349 01:27:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.349 01:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:08.349 01:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller 
nvme0 00:27:08.349 01:27:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.349 01:27:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.349 01:27:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.349 01:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:08.349 01:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:27:08.349 01:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:08.349 01:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:08.349 01:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:08.349 01:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:08.349 01:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjE4NzVkNWFhYmZiZWYyOTczNDJkNjMxMmJhZGRjZGMwZDc4NGU2NWRkYTI3MTc2/13YOg==: 00:27:08.349 01:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2MyZGQ4OThkMzQ4MDcyY2E0MWRmOWFjYjc1YjQ5OGUzMjMwYWVjNzUyOTYyNTZl6RCJMA==: 00:27:08.349 01:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:08.349 01:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:08.349 01:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjE4NzVkNWFhYmZiZWYyOTczNDJkNjMxMmJhZGRjZGMwZDc4NGU2NWRkYTI3MTc2/13YOg==: 00:27:08.349 01:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2MyZGQ4OThkMzQ4MDcyY2E0MWRmOWFjYjc1YjQ5OGUzMjMwYWVjNzUyOTYyNTZl6RCJMA==: ]] 00:27:08.349 01:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2MyZGQ4OThkMzQ4MDcyY2E0MWRmOWFjYjc1YjQ5OGUzMjMwYWVjNzUyOTYyNTZl6RCJMA==: 00:27:08.349 01:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:27:08.349 01:27:30 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:08.349 01:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:08.349 01:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:08.349 01:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:08.349 01:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:08.349 01:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:08.349 01:27:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.349 01:27:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.349 01:27:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.349 01:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:08.349 01:27:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:08.349 01:27:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:08.349 01:27:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:08.349 01:27:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:08.349 01:27:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:08.349 01:27:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:08.349 01:27:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:08.349 01:27:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:08.349 01:27:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:08.349 01:27:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:08.349 01:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:08.349 01:27:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.349 01:27:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.917 nvme0n1 00:27:08.917 01:27:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.917 01:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:08.917 01:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:08.917 01:27:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.917 01:27:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.917 01:27:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.917 01:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:08.917 01:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:08.917 01:27:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.917 01:27:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.917 01:27:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.917 01:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:08.917 01:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:27:08.917 01:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:08.917 01:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:08.917 01:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:08.917 01:27:31 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:27:08.917 01:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjVjY2RjM2ZlYWJlZmE0NDE0NGVmNWQ1OWUxZTBmNTOgpRPI: 00:27:08.917 01:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTA3OTViNmUxMGU2ZjkyYTNhMDhiYzVjNWIwODU0YWRAglV+: 00:27:08.917 01:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:08.917 01:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:08.917 01:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjVjY2RjM2ZlYWJlZmE0NDE0NGVmNWQ1OWUxZTBmNTOgpRPI: 00:27:08.917 01:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTA3OTViNmUxMGU2ZjkyYTNhMDhiYzVjNWIwODU0YWRAglV+: ]] 00:27:08.918 01:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTA3OTViNmUxMGU2ZjkyYTNhMDhiYzVjNWIwODU0YWRAglV+: 00:27:08.918 01:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:27:08.918 01:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:08.918 01:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:08.918 01:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:08.918 01:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:08.918 01:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:08.918 01:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:08.918 01:27:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.918 01:27:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.918 01:27:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.918 01:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 
00:27:08.918 01:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:08.918 01:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:08.918 01:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:08.918 01:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:08.918 01:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:08.918 01:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:08.918 01:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:08.918 01:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:08.918 01:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:08.918 01:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:08.918 01:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:08.918 01:27:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.918 01:27:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.486 nvme0n1 00:27:09.486 01:27:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.486 01:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:09.486 01:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:09.486 01:27:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.486 01:27:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.486 01:27:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:27:09.486 01:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:09.486 01:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:09.486 01:27:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.486 01:27:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.748 01:27:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.748 01:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:09.748 01:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:27:09.748 01:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:09.748 01:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:09.748 01:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:09.748 01:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:09.748 01:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWNiMmM5NzA1ZjNiYWUzMGRjOGJlNjM1Mzk3YmE4ZjM1MGEyZjQxYWEzMGY2N2E4cni5Qg==: 00:27:09.748 01:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjcwNzgzNjJlYWJmMzA0NTg3ZWQ1OTcyN2I2ODM4MWNnMXAr: 00:27:09.748 01:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:09.748 01:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:09.748 01:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWNiMmM5NzA1ZjNiYWUzMGRjOGJlNjM1Mzk3YmE4ZjM1MGEyZjQxYWEzMGY2N2E4cni5Qg==: 00:27:09.748 01:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjcwNzgzNjJlYWJmMzA0NTg3ZWQ1OTcyN2I2ODM4MWNnMXAr: ]] 00:27:09.748 01:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjcwNzgzNjJlYWJmMzA0NTg3ZWQ1OTcyN2I2ODM4MWNnMXAr: 00:27:09.748 01:27:31 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3
00:27:09.748 01:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:09.748 01:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:09.748 01:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:27:09.748 01:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:27:09.748 01:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:09.748 01:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:27:09.748 01:27:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:09.748 01:27:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:09.748 01:27:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:09.748 01:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:09.748 01:27:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:27:09.748 01:27:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:27:09.748 01:27:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:27:09.748 01:27:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:09.748 01:27:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:09.748 01:27:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:27:09.748 01:27:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:09.748 01:27:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:27:09.748 01:27:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:27:09.748 01:27:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:27:09.748 01:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:27:09.748 01:27:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:09.748 01:27:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:10.348 nvme0n1
00:27:10.348 01:27:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:10.348 01:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:10.348 01:27:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:10.348 01:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:10.348 01:27:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:10.348 01:27:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:10.348 01:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:10.348 01:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:10.348 01:27:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:10.348 01:27:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:10.348 01:27:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:10.348 01:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:10.348 01:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4
00:27:10.348 01:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:10.348 01:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:10.348 01:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:27:10.348 01:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:27:10.348 01:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzMyOTQyMDgxZWI3NmQ3ZjYwOWI4YjAzOTM2N2M3OTJkY2I5NDM0NDE1NTZjYWQ2MGZkNjQzZWQxZGI0ZmFhOYVRKJM=:
00:27:10.348 01:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:27:10.348 01:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:10.348 01:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:27:10.348 01:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzMyOTQyMDgxZWI3NmQ3ZjYwOWI4YjAzOTM2N2M3OTJkY2I5NDM0NDE1NTZjYWQ2MGZkNjQzZWQxZGI0ZmFhOYVRKJM=:
00:27:10.348 01:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:27:10.348 01:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4
00:27:10.348 01:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:10.348 01:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:10.348 01:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:27:10.348 01:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:27:10.348 01:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:10.348 01:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:27:10.348 01:27:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:10.348 01:27:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:10.348 01:27:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:10.348 01:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:10.348 01:27:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:27:10.348 01:27:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:27:10.348 01:27:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:27:10.348 01:27:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:10.348 01:27:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:10.348 01:27:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:27:10.348 01:27:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:10.348 01:27:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:27:10.348 01:27:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:27:10.348 01:27:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:27:10.348 01:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:27:10.348 01:27:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:10.348 01:27:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:10.916 nvme0n1
00:27:10.916 01:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:10.916 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:10.916 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:10.916 01:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:10.916 01:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:10.916 01:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:10.916 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:10.916 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:10.916 01:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:10.916 01:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:10.916 01:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:10.916 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:27:10.916 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:27:10.916 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:10.916 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0
00:27:10.916 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:10.916 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:10.916 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:10.916 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:27:10.917 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTg3NGE4NzQ3NjgwMzk5MmZkZTI0NzVhZmFhZDZhZDnvaRoE:
00:27:10.917 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGU2ZjBkZjRiNTI0ODE0NDI0YWQwMTI4NjI1ZGU1NGMyODIwZGNkZjRhN2RkNzY2ZjFmN2ZmZjI0OGY1OTI3ZsNxQrE=:
00:27:10.917 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:10.917 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:10.917 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTg3NGE4NzQ3NjgwMzk5MmZkZTI0NzVhZmFhZDZhZDnvaRoE:
00:27:10.917 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGU2ZjBkZjRiNTI0ODE0NDI0YWQwMTI4NjI1ZGU1NGMyODIwZGNkZjRhN2RkNzY2ZjFmN2ZmZjI0OGY1OTI3ZsNxQrE=: ]]
00:27:10.917 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGU2ZjBkZjRiNTI0ODE0NDI0YWQwMTI4NjI1ZGU1NGMyODIwZGNkZjRhN2RkNzY2ZjFmN2ZmZjI0OGY1OTI3ZsNxQrE=:
00:27:10.917 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0
00:27:10.917 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:10.917 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:10.917 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:27:10.917 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:27:10.917 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:10.917 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:27:10.917 01:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:10.917 01:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:10.917 01:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:10.917 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:10.917 01:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:27:10.917 01:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:27:10.917 01:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:27:10.917 01:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:10.917 01:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:10.917 01:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:27:10.917 01:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:10.917 01:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:27:10.917 01:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:27:10.917 01:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:27:10.917 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:27:10.917 01:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:10.917 01:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:11.175 nvme0n1
00:27:11.175 01:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:11.175 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:11.175 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:11.175 01:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:11.175 01:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:11.175 01:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:11.175 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:11.175 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:11.175 01:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:11.175 01:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:11.175 01:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:11.175 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:11.175 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1
00:27:11.175 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:11.175 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:11.175 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:11.175 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:27:11.175 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjE4NzVkNWFhYmZiZWYyOTczNDJkNjMxMmJhZGRjZGMwZDc4NGU2NWRkYTI3MTc2/13YOg==:
00:27:11.175 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2MyZGQ4OThkMzQ4MDcyY2E0MWRmOWFjYjc1YjQ5OGUzMjMwYWVjNzUyOTYyNTZl6RCJMA==:
00:27:11.175 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:11.175 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:11.175 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjE4NzVkNWFhYmZiZWYyOTczNDJkNjMxMmJhZGRjZGMwZDc4NGU2NWRkYTI3MTc2/13YOg==:
00:27:11.175 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2MyZGQ4OThkMzQ4MDcyY2E0MWRmOWFjYjc1YjQ5OGUzMjMwYWVjNzUyOTYyNTZl6RCJMA==: ]]
00:27:11.175 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2MyZGQ4OThkMzQ4MDcyY2E0MWRmOWFjYjc1YjQ5OGUzMjMwYWVjNzUyOTYyNTZl6RCJMA==:
00:27:11.175 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1
00:27:11.175 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:11.175 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:11.175 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:27:11.175 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:27:11.175 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:11.175 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:27:11.175 01:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:11.175 01:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:11.175 01:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:11.175 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:11.175 01:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:27:11.175 01:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:27:11.175 01:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:27:11.175 01:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:11.175 01:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:11.175 01:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:27:11.175 01:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:11.175 01:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:27:11.175 01:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:27:11.175 01:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:27:11.175 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:27:11.175 01:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:11.175 01:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:11.175 nvme0n1
00:27:11.175 01:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:11.175 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:11.175 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:11.175 01:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:11.175 01:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:11.175 01:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:11.175 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:11.175 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:11.175 01:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:11.175 01:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:11.434 01:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:11.434 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:11.434 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2
00:27:11.434 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:11.434 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:11.434 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:11.434 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:27:11.434 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjVjY2RjM2ZlYWJlZmE0NDE0NGVmNWQ1OWUxZTBmNTOgpRPI:
00:27:11.434 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTA3OTViNmUxMGU2ZjkyYTNhMDhiYzVjNWIwODU0YWRAglV+:
00:27:11.434 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:11.434 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:11.434 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjVjY2RjM2ZlYWJlZmE0NDE0NGVmNWQ1OWUxZTBmNTOgpRPI:
00:27:11.434 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTA3OTViNmUxMGU2ZjkyYTNhMDhiYzVjNWIwODU0YWRAglV+: ]]
00:27:11.434 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTA3OTViNmUxMGU2ZjkyYTNhMDhiYzVjNWIwODU0YWRAglV+:
00:27:11.434 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2
00:27:11.434 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:11.434 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:11.434 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:27:11.434 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:27:11.434 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:11.434 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:27:11.434 01:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:11.434 01:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:11.434 01:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:11.434 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:11.434 01:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:27:11.434 01:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:27:11.434 01:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:27:11.434 01:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:11.434 01:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:11.434 01:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:27:11.434 01:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:11.434 01:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:27:11.434 01:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:27:11.435 01:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:27:11.435 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:27:11.435 01:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:11.435 01:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:11.435 nvme0n1
00:27:11.435 01:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:11.435 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:11.435 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:11.435 01:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:11.435 01:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:11.435 01:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:11.435 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:11.435 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:11.435 01:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:11.435 01:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:11.435 01:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:11.435 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:11.435 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3
00:27:11.435 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:11.435 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:11.435 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:11.435 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:27:11.435 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWNiMmM5NzA1ZjNiYWUzMGRjOGJlNjM1Mzk3YmE4ZjM1MGEyZjQxYWEzMGY2N2E4cni5Qg==:
00:27:11.435 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjcwNzgzNjJlYWJmMzA0NTg3ZWQ1OTcyN2I2ODM4MWNnMXAr:
00:27:11.435 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:11.435 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:11.435 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWNiMmM5NzA1ZjNiYWUzMGRjOGJlNjM1Mzk3YmE4ZjM1MGEyZjQxYWEzMGY2N2E4cni5Qg==:
00:27:11.435 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjcwNzgzNjJlYWJmMzA0NTg3ZWQ1OTcyN2I2ODM4MWNnMXAr: ]]
00:27:11.435 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjcwNzgzNjJlYWJmMzA0NTg3ZWQ1OTcyN2I2ODM4MWNnMXAr:
00:27:11.435 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3
00:27:11.435 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:11.435 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:11.435 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:27:11.435 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:27:11.435 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:11.435 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:27:11.435 01:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:11.435 01:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:11.435 01:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:11.435 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:11.435 01:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:27:11.435 01:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:27:11.435 01:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:27:11.435 01:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:11.435 01:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:11.435 01:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:27:11.435 01:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:11.435 01:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:27:11.435 01:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:27:11.435 01:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:27:11.435 01:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:27:11.435 01:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:11.435 01:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:11.694 nvme0n1
00:27:11.694 01:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:11.694 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:11.694 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:11.694 01:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:11.694 01:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:11.694 01:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:11.694 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:11.694 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:11.694 01:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:11.694 01:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:11.694 01:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:11.694 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:11.694 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4
00:27:11.694 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:11.694 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:11.694 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:11.694 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:27:11.694 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzMyOTQyMDgxZWI3NmQ3ZjYwOWI4YjAzOTM2N2M3OTJkY2I5NDM0NDE1NTZjYWQ2MGZkNjQzZWQxZGI0ZmFhOYVRKJM=:
00:27:11.694 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:27:11.694 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:11.694 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:11.694 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzMyOTQyMDgxZWI3NmQ3ZjYwOWI4YjAzOTM2N2M3OTJkY2I5NDM0NDE1NTZjYWQ2MGZkNjQzZWQxZGI0ZmFhOYVRKJM=:
00:27:11.694 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:27:11.694 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4
00:27:11.694 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:11.694 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:11.694 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:27:11.694 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:27:11.694 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:11.694 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:27:11.694 01:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:11.694 01:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:11.694 01:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:11.694 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:11.694 01:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:27:11.694 01:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:27:11.694 01:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:27:11.694 01:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:11.694 01:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:11.694 01:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:27:11.694 01:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:11.694 01:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:27:11.694 01:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:27:11.694 01:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:27:11.694 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:27:11.694 01:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:11.694 01:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:11.956 nvme0n1
00:27:11.956 01:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:11.956 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:11.956 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:11.956 01:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:11.956 01:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:11.956 01:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:11.956 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:11.956 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:11.956 01:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:11.956 01:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:11.956 01:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:11.956 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:27:11.956 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:11.956 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0
00:27:11.956 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:11.956 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:11.956 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:11.956 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:27:11.956 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTg3NGE4NzQ3NjgwMzk5MmZkZTI0NzVhZmFhZDZhZDnvaRoE:
00:27:11.956 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGU2ZjBkZjRiNTI0ODE0NDI0YWQwMTI4NjI1ZGU1NGMyODIwZGNkZjRhN2RkNzY2ZjFmN2ZmZjI0OGY1OTI3ZsNxQrE=:
00:27:11.956 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:11.956 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:11.956 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTg3NGE4NzQ3NjgwMzk5MmZkZTI0NzVhZmFhZDZhZDnvaRoE:
00:27:11.956 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGU2ZjBkZjRiNTI0ODE0NDI0YWQwMTI4NjI1ZGU1NGMyODIwZGNkZjRhN2RkNzY2ZjFmN2ZmZjI0OGY1OTI3ZsNxQrE=: ]]
00:27:11.956 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGU2ZjBkZjRiNTI0ODE0NDI0YWQwMTI4NjI1ZGU1NGMyODIwZGNkZjRhN2RkNzY2ZjFmN2ZmZjI0OGY1OTI3ZsNxQrE=:
00:27:11.956 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0
00:27:11.956 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:11.956 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:11.956 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:11.956 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:27:11.956 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:11.956 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:27:11.956 01:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:11.956 01:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:11.956 01:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:11.956 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:11.956 01:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:27:11.956 01:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:27:11.956 01:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:27:11.956 01:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:11.956 01:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:11.956 01:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:27:11.956 01:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:11.956 01:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:27:11.956 01:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:27:11.956 01:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:27:11.956 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:27:11.956 01:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:11.956 01:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:12.215 nvme0n1
00:27:12.215 01:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:12.215 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:12.215 01:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:12.215 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:12.215 01:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:12.215 01:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:12.215 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:12.215 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:12.215 01:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:12.215 01:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:12.215 01:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:12.215 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:12.215 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1
00:27:12.215 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:12.215 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:12.215 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:12.215 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:27:12.215 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjE4NzVkNWFhYmZiZWYyOTczNDJkNjMxMmJhZGRjZGMwZDc4NGU2NWRkYTI3MTc2/13YOg==:
00:27:12.215 01:27:34
nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2MyZGQ4OThkMzQ4MDcyY2E0MWRmOWFjYjc1YjQ5OGUzMjMwYWVjNzUyOTYyNTZl6RCJMA==: 00:27:12.215 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:12.215 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:12.215 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjE4NzVkNWFhYmZiZWYyOTczNDJkNjMxMmJhZGRjZGMwZDc4NGU2NWRkYTI3MTc2/13YOg==: 00:27:12.215 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2MyZGQ4OThkMzQ4MDcyY2E0MWRmOWFjYjc1YjQ5OGUzMjMwYWVjNzUyOTYyNTZl6RCJMA==: ]] 00:27:12.215 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2MyZGQ4OThkMzQ4MDcyY2E0MWRmOWFjYjc1YjQ5OGUzMjMwYWVjNzUyOTYyNTZl6RCJMA==: 00:27:12.215 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:27:12.215 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:12.215 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:12.215 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:12.215 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:12.215 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:12.215 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:12.215 01:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.215 01:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.215 01:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.215 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:12.215 01:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 
00:27:12.215 01:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:12.215 01:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:12.215 01:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.215 01:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.215 01:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:12.215 01:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.215 01:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:12.215 01:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:12.215 01:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:12.215 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:12.215 01:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.215 01:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.474 nvme0n1 00:27:12.474 01:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.474 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:12.474 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:12.474 01:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.474 01:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.474 01:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.474 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:27:12.474 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:12.474 01:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.474 01:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.474 01:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.474 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:12.474 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:27:12.474 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:12.474 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:12.474 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:12.474 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:12.474 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjVjY2RjM2ZlYWJlZmE0NDE0NGVmNWQ1OWUxZTBmNTOgpRPI: 00:27:12.474 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTA3OTViNmUxMGU2ZjkyYTNhMDhiYzVjNWIwODU0YWRAglV+: 00:27:12.474 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:12.474 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:12.474 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjVjY2RjM2ZlYWJlZmE0NDE0NGVmNWQ1OWUxZTBmNTOgpRPI: 00:27:12.474 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTA3OTViNmUxMGU2ZjkyYTNhMDhiYzVjNWIwODU0YWRAglV+: ]] 00:27:12.474 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTA3OTViNmUxMGU2ZjkyYTNhMDhiYzVjNWIwODU0YWRAglV+: 00:27:12.474 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:27:12.474 01:27:34 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:12.474 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:12.474 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:12.474 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:12.475 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:12.475 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:12.475 01:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.475 01:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.475 01:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.475 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:12.475 01:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:12.475 01:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:12.475 01:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:12.475 01:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.475 01:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.475 01:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:12.475 01:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.475 01:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:12.475 01:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:12.475 01:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:12.475 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:12.475 01:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.475 01:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.475 nvme0n1 00:27:12.475 01:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.475 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:12.475 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:12.475 01:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.475 01:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.475 01:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.734 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:12.734 01:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:12.734 01:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.734 01:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.734 01:27:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.734 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:12.734 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:27:12.734 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:12.734 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:12.734 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:12.734 01:27:35 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=3 00:27:12.734 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWNiMmM5NzA1ZjNiYWUzMGRjOGJlNjM1Mzk3YmE4ZjM1MGEyZjQxYWEzMGY2N2E4cni5Qg==: 00:27:12.734 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjcwNzgzNjJlYWJmMzA0NTg3ZWQ1OTcyN2I2ODM4MWNnMXAr: 00:27:12.734 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:12.734 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:12.734 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWNiMmM5NzA1ZjNiYWUzMGRjOGJlNjM1Mzk3YmE4ZjM1MGEyZjQxYWEzMGY2N2E4cni5Qg==: 00:27:12.734 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjcwNzgzNjJlYWJmMzA0NTg3ZWQ1OTcyN2I2ODM4MWNnMXAr: ]] 00:27:12.734 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjcwNzgzNjJlYWJmMzA0NTg3ZWQ1OTcyN2I2ODM4MWNnMXAr: 00:27:12.734 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:27:12.734 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:12.734 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:12.734 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:12.734 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:12.734 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:12.734 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:12.734 01:27:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.734 01:27:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.734 01:27:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.734 01:27:35 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:12.734 01:27:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:12.734 01:27:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:12.734 01:27:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:12.734 01:27:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.734 01:27:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.734 01:27:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:12.734 01:27:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.734 01:27:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:12.734 01:27:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:12.734 01:27:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:12.734 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:12.734 01:27:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.734 01:27:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.734 nvme0n1 00:27:12.734 01:27:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.734 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:12.734 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:12.734 01:27:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.734 01:27:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.734 01:27:35 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.993 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:12.993 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:12.993 01:27:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.993 01:27:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.993 01:27:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.993 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:12.993 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:27:12.993 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:12.993 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:12.993 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:12.993 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:12.993 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzMyOTQyMDgxZWI3NmQ3ZjYwOWI4YjAzOTM2N2M3OTJkY2I5NDM0NDE1NTZjYWQ2MGZkNjQzZWQxZGI0ZmFhOYVRKJM=: 00:27:12.993 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:12.993 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:12.993 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:12.993 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzMyOTQyMDgxZWI3NmQ3ZjYwOWI4YjAzOTM2N2M3OTJkY2I5NDM0NDE1NTZjYWQ2MGZkNjQzZWQxZGI0ZmFhOYVRKJM=: 00:27:12.993 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:12.993 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:27:12.993 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 
-- # local digest dhgroup keyid ckey 00:27:12.993 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:12.993 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:12.993 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:12.993 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:12.993 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:12.993 01:27:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.993 01:27:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.993 01:27:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.993 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:12.993 01:27:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:12.993 01:27:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:12.993 01:27:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:12.993 01:27:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.993 01:27:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.993 01:27:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:12.993 01:27:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.993 01:27:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:12.993 01:27:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:12.993 01:27:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:12.993 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:12.993 01:27:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.993 01:27:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.993 nvme0n1 00:27:12.993 01:27:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.993 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:12.993 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:12.993 01:27:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.993 01:27:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.993 01:27:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.993 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:12.993 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:12.993 01:27:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.993 01:27:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.252 01:27:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.252 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:13.252 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:13.252 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:27:13.252 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:13.252 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:13.252 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:13.252 
01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:13.252 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTg3NGE4NzQ3NjgwMzk5MmZkZTI0NzVhZmFhZDZhZDnvaRoE: 00:27:13.252 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGU2ZjBkZjRiNTI0ODE0NDI0YWQwMTI4NjI1ZGU1NGMyODIwZGNkZjRhN2RkNzY2ZjFmN2ZmZjI0OGY1OTI3ZsNxQrE=: 00:27:13.252 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:13.252 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:13.252 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTg3NGE4NzQ3NjgwMzk5MmZkZTI0NzVhZmFhZDZhZDnvaRoE: 00:27:13.252 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGU2ZjBkZjRiNTI0ODE0NDI0YWQwMTI4NjI1ZGU1NGMyODIwZGNkZjRhN2RkNzY2ZjFmN2ZmZjI0OGY1OTI3ZsNxQrE=: ]] 00:27:13.252 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGU2ZjBkZjRiNTI0ODE0NDI0YWQwMTI4NjI1ZGU1NGMyODIwZGNkZjRhN2RkNzY2ZjFmN2ZmZjI0OGY1OTI3ZsNxQrE=: 00:27:13.252 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:27:13.252 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:13.252 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:13.252 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:13.252 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:13.252 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:13.252 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:13.252 01:27:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.252 01:27:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.252 
01:27:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.252 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:13.252 01:27:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:13.252 01:27:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:13.252 01:27:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:13.252 01:27:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:13.252 01:27:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:13.252 01:27:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:13.252 01:27:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:13.252 01:27:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:13.252 01:27:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:13.252 01:27:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:13.252 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:13.252 01:27:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.252 01:27:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.252 nvme0n1 00:27:13.252 01:27:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.252 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:13.252 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:13.252 01:27:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.252 01:27:35 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.512 01:27:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.512 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:13.512 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:13.512 01:27:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.512 01:27:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.512 01:27:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.512 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:13.512 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:27:13.512 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:13.512 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:13.512 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:13.512 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:13.512 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjE4NzVkNWFhYmZiZWYyOTczNDJkNjMxMmJhZGRjZGMwZDc4NGU2NWRkYTI3MTc2/13YOg==: 00:27:13.512 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2MyZGQ4OThkMzQ4MDcyY2E0MWRmOWFjYjc1YjQ5OGUzMjMwYWVjNzUyOTYyNTZl6RCJMA==: 00:27:13.512 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:13.512 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:13.512 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjE4NzVkNWFhYmZiZWYyOTczNDJkNjMxMmJhZGRjZGMwZDc4NGU2NWRkYTI3MTc2/13YOg==: 00:27:13.512 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:Y2MyZGQ4OThkMzQ4MDcyY2E0MWRmOWFjYjc1YjQ5OGUzMjMwYWVjNzUyOTYyNTZl6RCJMA==: ]] 00:27:13.512 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2MyZGQ4OThkMzQ4MDcyY2E0MWRmOWFjYjc1YjQ5OGUzMjMwYWVjNzUyOTYyNTZl6RCJMA==: 00:27:13.512 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:27:13.512 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:13.512 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:13.512 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:13.512 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:13.512 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:13.512 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:13.512 01:27:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.512 01:27:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.512 01:27:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.512 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:13.512 01:27:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:13.512 01:27:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:13.512 01:27:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:13.512 01:27:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:13.512 01:27:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:13.512 01:27:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:13.512 01:27:35 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:13.512 01:27:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:13.512 01:27:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:13.512 01:27:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:13.512 01:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:13.512 01:27:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.512 01:27:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.771 nvme0n1 00:27:13.771 01:27:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.771 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:13.771 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:13.771 01:27:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.771 01:27:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.771 01:27:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.771 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:13.771 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:13.771 01:27:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.771 01:27:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.771 01:27:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.771 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:13.771 01:27:36 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:27:13.771 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:13.771 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:13.771 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:13.771 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:13.771 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjVjY2RjM2ZlYWJlZmE0NDE0NGVmNWQ1OWUxZTBmNTOgpRPI: 00:27:13.771 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTA3OTViNmUxMGU2ZjkyYTNhMDhiYzVjNWIwODU0YWRAglV+: 00:27:13.771 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:13.771 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:13.771 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjVjY2RjM2ZlYWJlZmE0NDE0NGVmNWQ1OWUxZTBmNTOgpRPI: 00:27:13.771 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTA3OTViNmUxMGU2ZjkyYTNhMDhiYzVjNWIwODU0YWRAglV+: ]] 00:27:13.771 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTA3OTViNmUxMGU2ZjkyYTNhMDhiYzVjNWIwODU0YWRAglV+: 00:27:13.771 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:27:13.771 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:13.771 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:13.771 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:13.771 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:13.771 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:13.771 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe4096 00:27:13.771 01:27:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.771 01:27:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.771 01:27:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.771 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:13.771 01:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:13.771 01:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:13.771 01:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:13.771 01:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:13.771 01:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:13.771 01:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:13.771 01:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:13.771 01:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:13.772 01:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:13.772 01:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:13.772 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:13.772 01:27:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.772 01:27:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.031 nvme0n1 00:27:14.031 01:27:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.031 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:27:14.031 01:27:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.031 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:14.031 01:27:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.031 01:27:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.031 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:14.031 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:14.031 01:27:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.031 01:27:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.031 01:27:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.031 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:14.031 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:27:14.031 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:14.031 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:14.031 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:14.031 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:14.031 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWNiMmM5NzA1ZjNiYWUzMGRjOGJlNjM1Mzk3YmE4ZjM1MGEyZjQxYWEzMGY2N2E4cni5Qg==: 00:27:14.031 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjcwNzgzNjJlYWJmMzA0NTg3ZWQ1OTcyN2I2ODM4MWNnMXAr: 00:27:14.031 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:14.031 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:14.031 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MWNiMmM5NzA1ZjNiYWUzMGRjOGJlNjM1Mzk3YmE4ZjM1MGEyZjQxYWEzMGY2N2E4cni5Qg==: 00:27:14.031 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjcwNzgzNjJlYWJmMzA0NTg3ZWQ1OTcyN2I2ODM4MWNnMXAr: ]] 00:27:14.031 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjcwNzgzNjJlYWJmMzA0NTg3ZWQ1OTcyN2I2ODM4MWNnMXAr: 00:27:14.031 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:27:14.031 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:14.031 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:14.031 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:14.031 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:14.032 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:14.032 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:14.032 01:27:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.032 01:27:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.032 01:27:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.032 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:14.032 01:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:14.032 01:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:14.032 01:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:14.032 01:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:14.032 01:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:14.032 01:27:36 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:14.032 01:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:14.032 01:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:14.032 01:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:14.032 01:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:14.032 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:14.032 01:27:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.032 01:27:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.291 nvme0n1 00:27:14.291 01:27:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.291 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:14.291 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:14.291 01:27:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.291 01:27:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.291 01:27:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.291 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:14.291 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:14.291 01:27:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.291 01:27:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.291 01:27:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.291 01:27:36 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:14.291 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:27:14.291 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:14.291 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:14.291 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:14.291 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:14.291 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzMyOTQyMDgxZWI3NmQ3ZjYwOWI4YjAzOTM2N2M3OTJkY2I5NDM0NDE1NTZjYWQ2MGZkNjQzZWQxZGI0ZmFhOYVRKJM=: 00:27:14.291 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:14.291 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:14.291 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:14.291 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzMyOTQyMDgxZWI3NmQ3ZjYwOWI4YjAzOTM2N2M3OTJkY2I5NDM0NDE1NTZjYWQ2MGZkNjQzZWQxZGI0ZmFhOYVRKJM=: 00:27:14.291 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:14.291 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:27:14.291 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:14.291 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:14.291 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:14.291 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:14.291 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:14.291 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:14.291 01:27:36 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.291 01:27:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.291 01:27:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.291 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:14.291 01:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:14.291 01:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:14.291 01:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:14.291 01:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:14.291 01:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:14.291 01:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:14.291 01:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:14.291 01:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:14.291 01:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:14.291 01:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:14.291 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:14.291 01:27:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.291 01:27:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.550 nvme0n1 00:27:14.550 01:27:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.550 01:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:14.550 01:27:36 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:27:14.550 01:27:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.550 01:27:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.550 01:27:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.550 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:14.550 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:14.550 01:27:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.550 01:27:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.809 01:27:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.809 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:14.809 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:14.809 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:27:14.809 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:14.809 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:14.809 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:14.809 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:14.809 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTg3NGE4NzQ3NjgwMzk5MmZkZTI0NzVhZmFhZDZhZDnvaRoE: 00:27:14.809 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGU2ZjBkZjRiNTI0ODE0NDI0YWQwMTI4NjI1ZGU1NGMyODIwZGNkZjRhN2RkNzY2ZjFmN2ZmZjI0OGY1OTI3ZsNxQrE=: 00:27:14.809 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:14.809 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:14.809 01:27:37 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTg3NGE4NzQ3NjgwMzk5MmZkZTI0NzVhZmFhZDZhZDnvaRoE: 00:27:14.809 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGU2ZjBkZjRiNTI0ODE0NDI0YWQwMTI4NjI1ZGU1NGMyODIwZGNkZjRhN2RkNzY2ZjFmN2ZmZjI0OGY1OTI3ZsNxQrE=: ]] 00:27:14.809 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGU2ZjBkZjRiNTI0ODE0NDI0YWQwMTI4NjI1ZGU1NGMyODIwZGNkZjRhN2RkNzY2ZjFmN2ZmZjI0OGY1OTI3ZsNxQrE=: 00:27:14.809 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:27:14.809 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:14.809 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:14.809 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:14.809 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:14.809 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:14.809 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:14.809 01:27:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.809 01:27:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.809 01:27:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.809 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:14.809 01:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:14.809 01:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:14.809 01:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:14.809 01:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:14.809 01:27:37 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:14.809 01:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:14.809 01:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:14.809 01:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:14.809 01:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:14.809 01:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:14.809 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:14.809 01:27:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.809 01:27:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.070 nvme0n1 00:27:15.070 01:27:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.070 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:15.070 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:15.070 01:27:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.070 01:27:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.070 01:27:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.070 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:15.070 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:15.070 01:27:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.070 01:27:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.070 
01:27:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.070 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:15.070 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:27:15.070 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:15.070 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:15.070 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:15.070 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:15.070 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjE4NzVkNWFhYmZiZWYyOTczNDJkNjMxMmJhZGRjZGMwZDc4NGU2NWRkYTI3MTc2/13YOg==: 00:27:15.070 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2MyZGQ4OThkMzQ4MDcyY2E0MWRmOWFjYjc1YjQ5OGUzMjMwYWVjNzUyOTYyNTZl6RCJMA==: 00:27:15.070 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:15.070 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:15.070 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjE4NzVkNWFhYmZiZWYyOTczNDJkNjMxMmJhZGRjZGMwZDc4NGU2NWRkYTI3MTc2/13YOg==: 00:27:15.070 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2MyZGQ4OThkMzQ4MDcyY2E0MWRmOWFjYjc1YjQ5OGUzMjMwYWVjNzUyOTYyNTZl6RCJMA==: ]] 00:27:15.070 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2MyZGQ4OThkMzQ4MDcyY2E0MWRmOWFjYjc1YjQ5OGUzMjMwYWVjNzUyOTYyNTZl6RCJMA==: 00:27:15.070 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:27:15.070 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:15.070 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:15.070 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe6144 00:27:15.070 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:15.070 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:15.070 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:15.070 01:27:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.070 01:27:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.070 01:27:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.070 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:15.070 01:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:15.070 01:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:15.070 01:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:15.070 01:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.070 01:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.070 01:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:15.070 01:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.070 01:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:15.070 01:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:15.070 01:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:15.070 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:15.070 01:27:37 nvmf_tcp.nvmf_auth_host 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.070 01:27:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.640 nvme0n1 00:27:15.640 01:27:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.640 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:15.640 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:15.640 01:27:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.640 01:27:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.640 01:27:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.640 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:15.640 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:15.640 01:27:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.640 01:27:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.640 01:27:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.640 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:15.640 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:27:15.640 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:15.640 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:15.640 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:15.640 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:15.640 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjVjY2RjM2ZlYWJlZmE0NDE0NGVmNWQ1OWUxZTBmNTOgpRPI: 00:27:15.640 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:MTA3OTViNmUxMGU2ZjkyYTNhMDhiYzVjNWIwODU0YWRAglV+: 00:27:15.640 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:15.640 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:15.640 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjVjY2RjM2ZlYWJlZmE0NDE0NGVmNWQ1OWUxZTBmNTOgpRPI: 00:27:15.640 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTA3OTViNmUxMGU2ZjkyYTNhMDhiYzVjNWIwODU0YWRAglV+: ]] 00:27:15.640 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTA3OTViNmUxMGU2ZjkyYTNhMDhiYzVjNWIwODU0YWRAglV+: 00:27:15.640 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:27:15.640 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:15.640 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:15.640 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:15.640 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:15.640 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:15.640 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:15.640 01:27:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.640 01:27:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.640 01:27:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.640 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:15.640 01:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:15.640 01:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:15.640 01:27:37 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:27:15.640 01:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.640 01:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.640 01:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:15.640 01:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.640 01:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:15.640 01:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:15.640 01:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:15.640 01:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:15.640 01:27:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.640 01:27:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.900 nvme0n1 00:27:15.900 01:27:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.900 01:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:15.900 01:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:15.900 01:27:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.900 01:27:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.900 01:27:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.900 01:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:15.900 01:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:15.900 01:27:38 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.900 01:27:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.900 01:27:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.900 01:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:15.900 01:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:27:15.900 01:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:15.900 01:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:15.900 01:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:15.900 01:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:15.900 01:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWNiMmM5NzA1ZjNiYWUzMGRjOGJlNjM1Mzk3YmE4ZjM1MGEyZjQxYWEzMGY2N2E4cni5Qg==: 00:27:15.900 01:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjcwNzgzNjJlYWJmMzA0NTg3ZWQ1OTcyN2I2ODM4MWNnMXAr: 00:27:15.900 01:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:15.900 01:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:15.900 01:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWNiMmM5NzA1ZjNiYWUzMGRjOGJlNjM1Mzk3YmE4ZjM1MGEyZjQxYWEzMGY2N2E4cni5Qg==: 00:27:15.900 01:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjcwNzgzNjJlYWJmMzA0NTg3ZWQ1OTcyN2I2ODM4MWNnMXAr: ]] 00:27:15.900 01:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjcwNzgzNjJlYWJmMzA0NTg3ZWQ1OTcyN2I2ODM4MWNnMXAr: 00:27:15.900 01:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:27:15.900 01:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:15.900 01:27:38 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@57 -- # digest=sha384 00:27:15.900 01:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:15.900 01:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:15.900 01:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:15.900 01:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:15.900 01:27:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.900 01:27:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.900 01:27:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.900 01:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:15.900 01:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:15.900 01:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:15.900 01:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:15.900 01:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.900 01:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.900 01:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:15.900 01:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.900 01:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:15.900 01:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:15.900 01:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:15.900 01:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:15.900 01:27:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.900 01:27:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.470 nvme0n1 00:27:16.470 01:27:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.470 01:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:16.470 01:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:16.470 01:27:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.470 01:27:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.470 01:27:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.470 01:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:16.470 01:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:16.470 01:27:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.470 01:27:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.470 01:27:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.470 01:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:16.470 01:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:27:16.470 01:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:16.470 01:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:16.470 01:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:16.470 01:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:16.470 01:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NzMyOTQyMDgxZWI3NmQ3ZjYwOWI4YjAzOTM2N2M3OTJkY2I5NDM0NDE1NTZjYWQ2MGZkNjQzZWQxZGI0ZmFhOYVRKJM=: 00:27:16.470 01:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:16.470 01:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:16.470 01:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:16.470 01:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzMyOTQyMDgxZWI3NmQ3ZjYwOWI4YjAzOTM2N2M3OTJkY2I5NDM0NDE1NTZjYWQ2MGZkNjQzZWQxZGI0ZmFhOYVRKJM=: 00:27:16.470 01:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:16.470 01:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:27:16.470 01:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:16.470 01:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:16.470 01:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:16.470 01:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:16.470 01:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:16.470 01:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:16.470 01:27:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.470 01:27:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.470 01:27:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.470 01:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:16.470 01:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:16.470 01:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:16.470 01:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A 
ip_candidates 00:27:16.470 01:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:16.470 01:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:16.470 01:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:16.470 01:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:16.470 01:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:16.470 01:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:16.470 01:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:16.470 01:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:16.470 01:27:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.470 01:27:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.733 nvme0n1 00:27:16.733 01:27:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.733 01:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:16.733 01:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:16.733 01:27:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.733 01:27:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.733 01:27:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.733 01:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:16.733 01:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:16.733 01:27:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 
-- # xtrace_disable 00:27:16.733 01:27:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.733 01:27:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.733 01:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:16.733 01:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:16.733 01:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:27:16.733 01:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:16.733 01:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:16.733 01:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:16.733 01:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:16.733 01:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTg3NGE4NzQ3NjgwMzk5MmZkZTI0NzVhZmFhZDZhZDnvaRoE: 00:27:16.733 01:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGU2ZjBkZjRiNTI0ODE0NDI0YWQwMTI4NjI1ZGU1NGMyODIwZGNkZjRhN2RkNzY2ZjFmN2ZmZjI0OGY1OTI3ZsNxQrE=: 00:27:16.733 01:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:16.733 01:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:16.733 01:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTg3NGE4NzQ3NjgwMzk5MmZkZTI0NzVhZmFhZDZhZDnvaRoE: 00:27:16.733 01:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGU2ZjBkZjRiNTI0ODE0NDI0YWQwMTI4NjI1ZGU1NGMyODIwZGNkZjRhN2RkNzY2ZjFmN2ZmZjI0OGY1OTI3ZsNxQrE=: ]] 00:27:16.733 01:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGU2ZjBkZjRiNTI0ODE0NDI0YWQwMTI4NjI1ZGU1NGMyODIwZGNkZjRhN2RkNzY2ZjFmN2ZmZjI0OGY1OTI3ZsNxQrE=: 00:27:16.733 01:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:27:16.733 01:27:39 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:16.734 01:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:16.734 01:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:16.734 01:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:16.734 01:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:16.734 01:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:16.734 01:27:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.734 01:27:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.734 01:27:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.734 01:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:16.734 01:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:16.734 01:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:16.734 01:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:16.734 01:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:16.734 01:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:16.734 01:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:16.734 01:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:16.734 01:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:16.734 01:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:16.734 01:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:16.734 01:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 
-- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:16.734 01:27:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.734 01:27:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.304 nvme0n1 00:27:17.304 01:27:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.304 01:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.304 01:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:17.304 01:27:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.304 01:27:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.304 01:27:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.564 01:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:17.564 01:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:17.564 01:27:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.564 01:27:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.564 01:27:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.564 01:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:17.564 01:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:27:17.565 01:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.565 01:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:17.565 01:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:17.565 01:27:39 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=1 00:27:17.565 01:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjE4NzVkNWFhYmZiZWYyOTczNDJkNjMxMmJhZGRjZGMwZDc4NGU2NWRkYTI3MTc2/13YOg==: 00:27:17.565 01:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2MyZGQ4OThkMzQ4MDcyY2E0MWRmOWFjYjc1YjQ5OGUzMjMwYWVjNzUyOTYyNTZl6RCJMA==: 00:27:17.565 01:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:17.565 01:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:17.565 01:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjE4NzVkNWFhYmZiZWYyOTczNDJkNjMxMmJhZGRjZGMwZDc4NGU2NWRkYTI3MTc2/13YOg==: 00:27:17.565 01:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2MyZGQ4OThkMzQ4MDcyY2E0MWRmOWFjYjc1YjQ5OGUzMjMwYWVjNzUyOTYyNTZl6RCJMA==: ]] 00:27:17.565 01:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2MyZGQ4OThkMzQ4MDcyY2E0MWRmOWFjYjc1YjQ5OGUzMjMwYWVjNzUyOTYyNTZl6RCJMA==: 00:27:17.565 01:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:27:17.565 01:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:17.565 01:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:17.565 01:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:17.565 01:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:17.565 01:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:17.565 01:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:17.565 01:27:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.565 01:27:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.565 01:27:39 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.565 01:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:17.565 01:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:17.565 01:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:17.565 01:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:17.565 01:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.565 01:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.565 01:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:17.565 01:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.565 01:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:17.565 01:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:17.565 01:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:17.565 01:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:17.565 01:27:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.565 01:27:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.135 nvme0n1 00:27:18.135 01:27:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.135 01:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.135 01:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.135 01:27:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.135 01:27:40 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:18.135 01:27:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.135 01:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.135 01:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.135 01:27:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.135 01:27:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.135 01:27:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.135 01:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:18.135 01:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:27:18.135 01:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.135 01:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:18.135 01:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:18.135 01:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:18.135 01:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjVjY2RjM2ZlYWJlZmE0NDE0NGVmNWQ1OWUxZTBmNTOgpRPI: 00:27:18.135 01:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTA3OTViNmUxMGU2ZjkyYTNhMDhiYzVjNWIwODU0YWRAglV+: 00:27:18.135 01:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:18.135 01:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:18.135 01:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjVjY2RjM2ZlYWJlZmE0NDE0NGVmNWQ1OWUxZTBmNTOgpRPI: 00:27:18.135 01:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTA3OTViNmUxMGU2ZjkyYTNhMDhiYzVjNWIwODU0YWRAglV+: ]] 00:27:18.135 01:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:MTA3OTViNmUxMGU2ZjkyYTNhMDhiYzVjNWIwODU0YWRAglV+: 00:27:18.135 01:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:27:18.135 01:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.135 01:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:18.135 01:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:18.135 01:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:18.135 01:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.135 01:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:18.135 01:27:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.135 01:27:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.135 01:27:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.135 01:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.135 01:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:18.135 01:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:18.135 01:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:18.135 01:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.135 01:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.135 01:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:18.135 01:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.135 01:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:18.135 01:27:40 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:18.135 01:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:18.135 01:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:18.135 01:27:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.135 01:27:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.705 nvme0n1 00:27:18.705 01:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.705 01:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.705 01:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.705 01:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.705 01:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.705 01:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.705 01:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.705 01:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.705 01:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.705 01:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.705 01:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.705 01:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:18.705 01:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:27:18.705 01:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.706 01:27:41 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:18.706 01:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:18.706 01:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:18.706 01:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWNiMmM5NzA1ZjNiYWUzMGRjOGJlNjM1Mzk3YmE4ZjM1MGEyZjQxYWEzMGY2N2E4cni5Qg==: 00:27:18.706 01:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjcwNzgzNjJlYWJmMzA0NTg3ZWQ1OTcyN2I2ODM4MWNnMXAr: 00:27:18.706 01:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:18.706 01:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:18.706 01:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWNiMmM5NzA1ZjNiYWUzMGRjOGJlNjM1Mzk3YmE4ZjM1MGEyZjQxYWEzMGY2N2E4cni5Qg==: 00:27:18.706 01:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjcwNzgzNjJlYWJmMzA0NTg3ZWQ1OTcyN2I2ODM4MWNnMXAr: ]] 00:27:18.706 01:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjcwNzgzNjJlYWJmMzA0NTg3ZWQ1OTcyN2I2ODM4MWNnMXAr: 00:27:18.706 01:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:27:18.706 01:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.706 01:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:18.706 01:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:18.706 01:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:18.706 01:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.706 01:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:18.706 01:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.706 01:27:41 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.706 01:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.706 01:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.706 01:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:18.706 01:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:18.706 01:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:18.706 01:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.706 01:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.706 01:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:18.706 01:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.706 01:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:18.706 01:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:18.706 01:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:18.706 01:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:18.706 01:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.706 01:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.274 nvme0n1 00:27:19.274 01:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.274 01:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.274 01:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.274 01:27:41 nvmf_tcp.nvmf_auth_host 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.274 01:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.274 01:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.274 01:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.274 01:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.274 01:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.274 01:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.274 01:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.274 01:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.274 01:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:27:19.274 01:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.274 01:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:19.274 01:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:19.274 01:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:19.274 01:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzMyOTQyMDgxZWI3NmQ3ZjYwOWI4YjAzOTM2N2M3OTJkY2I5NDM0NDE1NTZjYWQ2MGZkNjQzZWQxZGI0ZmFhOYVRKJM=: 00:27:19.274 01:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:19.274 01:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:19.274 01:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:19.274 01:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzMyOTQyMDgxZWI3NmQ3ZjYwOWI4YjAzOTM2N2M3OTJkY2I5NDM0NDE1NTZjYWQ2MGZkNjQzZWQxZGI0ZmFhOYVRKJM=: 00:27:19.274 01:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:19.274 
01:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:27:19.274 01:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.274 01:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:19.274 01:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:19.274 01:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:19.274 01:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.274 01:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:19.274 01:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.274 01:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.274 01:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.274 01:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.274 01:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:19.274 01:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:19.274 01:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:19.274 01:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.274 01:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.274 01:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:19.274 01:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.274 01:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:19.274 01:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:19.274 01:27:41 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:19.274 01:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:19.274 01:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.274 01:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.843 nvme0n1 00:27:19.843 01:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.843 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.843 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.843 01:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.843 01:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.843 01:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.104 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.104 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.104 01:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.104 01:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.104 01:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.104 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:20.104 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:20.104 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.104 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:27:20.104 
01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:20.104 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:20.104 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:20.104 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:27:20.104 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTg3NGE4NzQ3NjgwMzk5MmZkZTI0NzVhZmFhZDZhZDnvaRoE:
00:27:20.104 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGU2ZjBkZjRiNTI0ODE0NDI0YWQwMTI4NjI1ZGU1NGMyODIwZGNkZjRhN2RkNzY2ZjFmN2ZmZjI0OGY1OTI3ZsNxQrE=:
00:27:20.104 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:20.104 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:20.104 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTg3NGE4NzQ3NjgwMzk5MmZkZTI0NzVhZmFhZDZhZDnvaRoE:
00:27:20.104 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGU2ZjBkZjRiNTI0ODE0NDI0YWQwMTI4NjI1ZGU1NGMyODIwZGNkZjRhN2RkNzY2ZjFmN2ZmZjI0OGY1OTI3ZsNxQrE=: ]]
00:27:20.104 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGU2ZjBkZjRiNTI0ODE0NDI0YWQwMTI4NjI1ZGU1NGMyODIwZGNkZjRhN2RkNzY2ZjFmN2ZmZjI0OGY1OTI3ZsNxQrE=:
00:27:20.104 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0
00:27:20.104 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:20.104 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:20.104 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:27:20.104 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:27:20.104 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:20.104 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:27:20.104 01:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:20.104 01:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:20.104 01:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:20.104 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:20.104 01:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:27:20.104 01:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:27:20.104 01:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:27:20.104 01:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:20.104 01:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:20.104 01:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:27:20.104 01:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:20.104 01:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:27:20.104 01:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:27:20.104 01:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:27:20.104 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:27:20.104 01:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:20.104 01:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:20.104 nvme0n1
00:27:20.104 01:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:20.104 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:20.104 01:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:20.104 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:20.104 01:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:20.104 01:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:20.104 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:20.104 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:20.104 01:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:20.104 01:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:20.104 01:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:20.104 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:20.104 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1
00:27:20.104 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:20.104 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:20.104 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:20.104 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:27:20.104 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjE4NzVkNWFhYmZiZWYyOTczNDJkNjMxMmJhZGRjZGMwZDc4NGU2NWRkYTI3MTc2/13YOg==:
00:27:20.104 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2MyZGQ4OThkMzQ4MDcyY2E0MWRmOWFjYjc1YjQ5OGUzMjMwYWVjNzUyOTYyNTZl6RCJMA==:
00:27:20.104 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:20.104 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:20.104 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjE4NzVkNWFhYmZiZWYyOTczNDJkNjMxMmJhZGRjZGMwZDc4NGU2NWRkYTI3MTc2/13YOg==:
00:27:20.104 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2MyZGQ4OThkMzQ4MDcyY2E0MWRmOWFjYjc1YjQ5OGUzMjMwYWVjNzUyOTYyNTZl6RCJMA==: ]]
00:27:20.104 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2MyZGQ4OThkMzQ4MDcyY2E0MWRmOWFjYjc1YjQ5OGUzMjMwYWVjNzUyOTYyNTZl6RCJMA==:
00:27:20.105 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1
00:27:20.105 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:20.105 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:20.105 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:27:20.105 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:27:20.105 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:20.105 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:27:20.105 01:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:20.105 01:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:20.105 01:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:20.105 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:20.105 01:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:27:20.105 01:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:27:20.105 01:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:27:20.105 01:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:20.365 01:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:20.365 01:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:27:20.365 01:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:20.365 01:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:27:20.365 01:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:27:20.365 01:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:27:20.365 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:27:20.365 01:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:20.365 01:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:20.365 nvme0n1
00:27:20.365 01:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:20.365 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:20.365 01:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:20.365 01:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:20.365 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:20.365 01:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:20.365 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:20.365 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:20.365 01:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:20.365 01:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:20.365 01:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:20.365 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:20.365 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2
00:27:20.365 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:20.365 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:20.365 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:20.365 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:27:20.365 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjVjY2RjM2ZlYWJlZmE0NDE0NGVmNWQ1OWUxZTBmNTOgpRPI:
00:27:20.365 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTA3OTViNmUxMGU2ZjkyYTNhMDhiYzVjNWIwODU0YWRAglV+:
00:27:20.365 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:20.365 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:20.365 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjVjY2RjM2ZlYWJlZmE0NDE0NGVmNWQ1OWUxZTBmNTOgpRPI:
00:27:20.365 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTA3OTViNmUxMGU2ZjkyYTNhMDhiYzVjNWIwODU0YWRAglV+: ]]
00:27:20.365 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTA3OTViNmUxMGU2ZjkyYTNhMDhiYzVjNWIwODU0YWRAglV+:
00:27:20.365 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2
00:27:20.365 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:20.365 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:20.365 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:27:20.365 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:27:20.365 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:20.365 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:27:20.365 01:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:20.365 01:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:20.365 01:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:20.365 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:20.365 01:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:27:20.365 01:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:27:20.365 01:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:27:20.365 01:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:20.365 01:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:20.365 01:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:27:20.365 01:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:20.365 01:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:27:20.365 01:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:27:20.365 01:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:27:20.365 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:27:20.365 01:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:20.365 01:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:20.625 nvme0n1
00:27:20.625 01:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:20.625 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:20.626 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:20.626 01:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:20.626 01:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:20.626 01:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:20.626 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:20.626 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:20.626 01:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:20.626 01:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:20.626 01:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:20.626 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:20.626 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3
00:27:20.626 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:20.626 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:20.626 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:20.626 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:27:20.626 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWNiMmM5NzA1ZjNiYWUzMGRjOGJlNjM1Mzk3YmE4ZjM1MGEyZjQxYWEzMGY2N2E4cni5Qg==:
00:27:20.626 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjcwNzgzNjJlYWJmMzA0NTg3ZWQ1OTcyN2I2ODM4MWNnMXAr:
00:27:20.626 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:20.626 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:20.626 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWNiMmM5NzA1ZjNiYWUzMGRjOGJlNjM1Mzk3YmE4ZjM1MGEyZjQxYWEzMGY2N2E4cni5Qg==:
00:27:20.626 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjcwNzgzNjJlYWJmMzA0NTg3ZWQ1OTcyN2I2ODM4MWNnMXAr: ]]
00:27:20.626 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjcwNzgzNjJlYWJmMzA0NTg3ZWQ1OTcyN2I2ODM4MWNnMXAr:
00:27:20.626 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3
00:27:20.626 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:20.626 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:20.626 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:27:20.626 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:27:20.626 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:20.626 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:27:20.626 01:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:20.626 01:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:20.626 01:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:20.626 01:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:20.626 01:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:27:20.626 01:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:27:20.626 01:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:27:20.626 01:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:20.626 01:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:20.626 01:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:27:20.626 01:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:20.626 01:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:27:20.626 01:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:27:20.626 01:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:27:20.626 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:27:20.626 01:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:20.626 01:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:20.886 nvme0n1
00:27:20.886 01:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:20.886 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:20.886 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:20.886 01:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:20.886 01:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:20.886 01:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:20.886 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:20.886 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:20.886 01:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:20.886 01:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:20.886 01:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:20.886 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:20.886 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4
00:27:20.886 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:20.886 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:20.886 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:20.886 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:27:20.886 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzMyOTQyMDgxZWI3NmQ3ZjYwOWI4YjAzOTM2N2M3OTJkY2I5NDM0NDE1NTZjYWQ2MGZkNjQzZWQxZGI0ZmFhOYVRKJM=:
00:27:20.886 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:27:20.886 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:20.886 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:20.886 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzMyOTQyMDgxZWI3NmQ3ZjYwOWI4YjAzOTM2N2M3OTJkY2I5NDM0NDE1NTZjYWQ2MGZkNjQzZWQxZGI0ZmFhOYVRKJM=:
00:27:20.886 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:27:20.886 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4
00:27:20.886 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:20.886 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:20.886 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:27:20.886 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:27:20.886 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:20.886 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:27:20.886 01:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:20.886 01:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:20.886 01:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:20.886 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:20.886 01:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:27:20.886 01:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:27:20.886 01:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:27:20.886 01:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:20.886 01:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:20.886 01:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:27:20.886 01:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:20.886 01:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:27:20.886 01:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:27:20.886 01:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:27:20.887 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:27:20.887 01:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:20.887 01:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:20.887 nvme0n1
00:27:20.887 01:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:20.887 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:20.887 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:20.887 01:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:20.887 01:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:20.887 01:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:20.887 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:21.145 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:21.145 01:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:21.145 01:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:21.145 01:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:21.145 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:27:21.145 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:21.145 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0
00:27:21.145 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:21.145 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:21.145 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:21.145 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:27:21.145 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTg3NGE4NzQ3NjgwMzk5MmZkZTI0NzVhZmFhZDZhZDnvaRoE:
00:27:21.145 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGU2ZjBkZjRiNTI0ODE0NDI0YWQwMTI4NjI1ZGU1NGMyODIwZGNkZjRhN2RkNzY2ZjFmN2ZmZjI0OGY1OTI3ZsNxQrE=:
00:27:21.145 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:21.145 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:21.145 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTg3NGE4NzQ3NjgwMzk5MmZkZTI0NzVhZmFhZDZhZDnvaRoE:
00:27:21.145 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGU2ZjBkZjRiNTI0ODE0NDI0YWQwMTI4NjI1ZGU1NGMyODIwZGNkZjRhN2RkNzY2ZjFmN2ZmZjI0OGY1OTI3ZsNxQrE=: ]]
00:27:21.145 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGU2ZjBkZjRiNTI0ODE0NDI0YWQwMTI4NjI1ZGU1NGMyODIwZGNkZjRhN2RkNzY2ZjFmN2ZmZjI0OGY1OTI3ZsNxQrE=:
00:27:21.145 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0
00:27:21.145 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:21.145 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:21.145 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:21.145 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:27:21.145 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:21.145 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:27:21.145 01:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:21.145 01:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:21.145 01:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:21.145 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:21.145 01:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:27:21.145 01:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:27:21.145 01:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:27:21.145 01:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:21.145 01:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:21.145 01:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:27:21.145 01:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:21.145 01:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:27:21.145 01:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:27:21.145 01:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:27:21.145 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:27:21.145 01:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:21.145 01:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:21.145 nvme0n1
00:27:21.145 01:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:21.145 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:21.145 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:21.145 01:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:21.145 01:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:21.145 01:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:21.145 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:21.145 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:21.145 01:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:21.145 01:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:21.145 01:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:21.145 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:21.145 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1
00:27:21.145 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:21.145 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:21.145 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:21.145 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:27:21.145 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjE4NzVkNWFhYmZiZWYyOTczNDJkNjMxMmJhZGRjZGMwZDc4NGU2NWRkYTI3MTc2/13YOg==:
00:27:21.145 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2MyZGQ4OThkMzQ4MDcyY2E0MWRmOWFjYjc1YjQ5OGUzMjMwYWVjNzUyOTYyNTZl6RCJMA==:
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjE4NzVkNWFhYmZiZWYyOTczNDJkNjMxMmJhZGRjZGMwZDc4NGU2NWRkYTI3MTc2/13YOg==:
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2MyZGQ4OThkMzQ4MDcyY2E0MWRmOWFjYjc1YjQ5OGUzMjMwYWVjNzUyOTYyNTZl6RCJMA==: ]]
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2MyZGQ4OThkMzQ4MDcyY2E0MWRmOWFjYjc1YjQ5OGUzMjMwYWVjNzUyOTYyNTZl6RCJMA==:
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:21.405 nvme0n1
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjVjY2RjM2ZlYWJlZmE0NDE0NGVmNWQ1OWUxZTBmNTOgpRPI:
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTA3OTViNmUxMGU2ZjkyYTNhMDhiYzVjNWIwODU0YWRAglV+:
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjVjY2RjM2ZlYWJlZmE0NDE0NGVmNWQ1OWUxZTBmNTOgpRPI:
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTA3OTViNmUxMGU2ZjkyYTNhMDhiYzVjNWIwODU0YWRAglV+: ]]
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTA3OTViNmUxMGU2ZjkyYTNhMDhiYzVjNWIwODU0YWRAglV+:
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:21.405 01:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:21.664 nvme0n1
00:27:21.664 01:27:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:21.664 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:21.664 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:21.664 01:27:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:21.664 01:27:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:21.664 01:27:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:21.664 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:21.664 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:21.664 01:27:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:21.664 01:27:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:21.664 01:27:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:21.664 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:21.664 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3
00:27:21.664 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:21.664 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:21.664 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:21.664 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:27:21.664 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWNiMmM5NzA1ZjNiYWUzMGRjOGJlNjM1Mzk3YmE4ZjM1MGEyZjQxYWEzMGY2N2E4cni5Qg==:
00:27:21.664 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjcwNzgzNjJlYWJmMzA0NTg3ZWQ1OTcyN2I2ODM4MWNnMXAr:
00:27:21.664 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:21.664 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:21.664 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWNiMmM5NzA1ZjNiYWUzMGRjOGJlNjM1Mzk3YmE4ZjM1MGEyZjQxYWEzMGY2N2E4cni5Qg==:
00:27:21.664 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjcwNzgzNjJlYWJmMzA0NTg3ZWQ1OTcyN2I2ODM4MWNnMXAr: ]]
00:27:21.664 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo
DHHC-1:00:MjcwNzgzNjJlYWJmMzA0NTg3ZWQ1OTcyN2I2ODM4MWNnMXAr: 00:27:21.664 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:27:21.664 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.664 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:21.664 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:21.664 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:21.664 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.664 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:21.664 01:27:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.664 01:27:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.664 01:27:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.664 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.664 01:27:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:21.664 01:27:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:21.664 01:27:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:21.664 01:27:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.664 01:27:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.664 01:27:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:21.664 01:27:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.664 01:27:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:21.664 01:27:44 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:21.664 01:27:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:21.664 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:21.664 01:27:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.664 01:27:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.923 nvme0n1 00:27:21.923 01:27:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.923 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.923 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.923 01:27:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.923 01:27:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.923 01:27:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.923 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.923 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.923 01:27:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.923 01:27:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.923 01:27:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.923 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.923 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:27:21.923 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.923 01:27:44 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:21.923 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:21.923 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:21.923 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzMyOTQyMDgxZWI3NmQ3ZjYwOWI4YjAzOTM2N2M3OTJkY2I5NDM0NDE1NTZjYWQ2MGZkNjQzZWQxZGI0ZmFhOYVRKJM=: 00:27:21.923 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:21.923 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:21.923 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:21.923 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzMyOTQyMDgxZWI3NmQ3ZjYwOWI4YjAzOTM2N2M3OTJkY2I5NDM0NDE1NTZjYWQ2MGZkNjQzZWQxZGI0ZmFhOYVRKJM=: 00:27:21.923 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:21.923 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:27:21.923 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.923 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:21.923 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:21.923 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:21.923 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.923 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:21.923 01:27:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.923 01:27:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.923 01:27:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.923 01:27:44 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:27:21.923 01:27:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:21.923 01:27:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:21.923 01:27:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:21.923 01:27:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.923 01:27:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.923 01:27:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:21.923 01:27:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.923 01:27:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:21.923 01:27:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:21.923 01:27:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:21.923 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:21.923 01:27:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.923 01:27:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.183 nvme0n1 00:27:22.183 01:27:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.183 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.183 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.183 01:27:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.183 01:27:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.183 01:27:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:27:22.183 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.183 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.183 01:27:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.183 01:27:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.183 01:27:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.183 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:22.183 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.183 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:27:22.183 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.183 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:22.183 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:22.183 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:22.183 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTg3NGE4NzQ3NjgwMzk5MmZkZTI0NzVhZmFhZDZhZDnvaRoE: 00:27:22.183 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGU2ZjBkZjRiNTI0ODE0NDI0YWQwMTI4NjI1ZGU1NGMyODIwZGNkZjRhN2RkNzY2ZjFmN2ZmZjI0OGY1OTI3ZsNxQrE=: 00:27:22.183 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:22.183 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:22.183 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTg3NGE4NzQ3NjgwMzk5MmZkZTI0NzVhZmFhZDZhZDnvaRoE: 00:27:22.183 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGU2ZjBkZjRiNTI0ODE0NDI0YWQwMTI4NjI1ZGU1NGMyODIwZGNkZjRhN2RkNzY2ZjFmN2ZmZjI0OGY1OTI3ZsNxQrE=: ]] 00:27:22.183 01:27:44 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGU2ZjBkZjRiNTI0ODE0NDI0YWQwMTI4NjI1ZGU1NGMyODIwZGNkZjRhN2RkNzY2ZjFmN2ZmZjI0OGY1OTI3ZsNxQrE=: 00:27:22.183 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:27:22.183 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.183 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:22.183 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:22.183 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:22.183 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.183 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:22.183 01:27:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.183 01:27:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.183 01:27:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.183 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.183 01:27:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:22.183 01:27:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:22.183 01:27:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:22.183 01:27:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.183 01:27:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.183 01:27:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:22.183 01:27:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.183 01:27:44 nvmf_tcp.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:22.183 01:27:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:22.183 01:27:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:22.183 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:22.183 01:27:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.183 01:27:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.443 nvme0n1 00:27:22.443 01:27:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.443 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.443 01:27:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.443 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.443 01:27:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.443 01:27:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.443 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.443 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.443 01:27:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.443 01:27:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.443 01:27:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.443 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.443 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:27:22.443 01:27:44 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.443 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:22.443 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:22.443 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:22.443 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjE4NzVkNWFhYmZiZWYyOTczNDJkNjMxMmJhZGRjZGMwZDc4NGU2NWRkYTI3MTc2/13YOg==: 00:27:22.443 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2MyZGQ4OThkMzQ4MDcyY2E0MWRmOWFjYjc1YjQ5OGUzMjMwYWVjNzUyOTYyNTZl6RCJMA==: 00:27:22.443 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:22.443 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:22.443 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjE4NzVkNWFhYmZiZWYyOTczNDJkNjMxMmJhZGRjZGMwZDc4NGU2NWRkYTI3MTc2/13YOg==: 00:27:22.443 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2MyZGQ4OThkMzQ4MDcyY2E0MWRmOWFjYjc1YjQ5OGUzMjMwYWVjNzUyOTYyNTZl6RCJMA==: ]] 00:27:22.443 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2MyZGQ4OThkMzQ4MDcyY2E0MWRmOWFjYjc1YjQ5OGUzMjMwYWVjNzUyOTYyNTZl6RCJMA==: 00:27:22.443 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:27:22.443 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.443 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:22.443 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:22.443 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:22.443 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.443 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:22.443 01:27:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.443 01:27:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.443 01:27:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.443 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.443 01:27:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:22.443 01:27:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:22.443 01:27:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:22.443 01:27:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.443 01:27:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.443 01:27:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:22.443 01:27:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.443 01:27:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:22.443 01:27:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:22.443 01:27:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:22.443 01:27:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:22.443 01:27:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.443 01:27:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.702 nvme0n1 00:27:22.702 01:27:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.702 01:27:45 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.702 01:27:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.702 01:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.702 01:27:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.702 01:27:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.702 01:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.702 01:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.702 01:27:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.702 01:27:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.702 01:27:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.702 01:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.702 01:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:27:22.702 01:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.702 01:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:22.702 01:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:22.702 01:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:22.702 01:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjVjY2RjM2ZlYWJlZmE0NDE0NGVmNWQ1OWUxZTBmNTOgpRPI: 00:27:22.702 01:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTA3OTViNmUxMGU2ZjkyYTNhMDhiYzVjNWIwODU0YWRAglV+: 00:27:22.702 01:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:22.702 01:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:22.702 01:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:MjVjY2RjM2ZlYWJlZmE0NDE0NGVmNWQ1OWUxZTBmNTOgpRPI: 00:27:22.702 01:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTA3OTViNmUxMGU2ZjkyYTNhMDhiYzVjNWIwODU0YWRAglV+: ]] 00:27:22.702 01:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTA3OTViNmUxMGU2ZjkyYTNhMDhiYzVjNWIwODU0YWRAglV+: 00:27:22.702 01:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:27:22.702 01:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.702 01:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:22.702 01:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:22.702 01:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:22.702 01:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.702 01:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:22.702 01:27:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.961 01:27:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.961 01:27:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.961 01:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.961 01:27:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:22.961 01:27:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:22.961 01:27:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:22.961 01:27:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.961 01:27:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.961 01:27:45 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:22.961 01:27:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.961 01:27:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:22.961 01:27:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:22.961 01:27:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:22.961 01:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:22.961 01:27:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.961 01:27:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.961 nvme0n1 00:27:22.961 01:27:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.961 01:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.961 01:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.961 01:27:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.961 01:27:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.961 01:27:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.220 01:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.220 01:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.220 01:27:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.220 01:27:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.220 01:27:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.220 01:27:45 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.220 01:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:27:23.220 01:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.220 01:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:23.220 01:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:23.220 01:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:23.220 01:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWNiMmM5NzA1ZjNiYWUzMGRjOGJlNjM1Mzk3YmE4ZjM1MGEyZjQxYWEzMGY2N2E4cni5Qg==: 00:27:23.220 01:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjcwNzgzNjJlYWJmMzA0NTg3ZWQ1OTcyN2I2ODM4MWNnMXAr: 00:27:23.220 01:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:23.220 01:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:23.220 01:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWNiMmM5NzA1ZjNiYWUzMGRjOGJlNjM1Mzk3YmE4ZjM1MGEyZjQxYWEzMGY2N2E4cni5Qg==: 00:27:23.220 01:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjcwNzgzNjJlYWJmMzA0NTg3ZWQ1OTcyN2I2ODM4MWNnMXAr: ]] 00:27:23.220 01:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjcwNzgzNjJlYWJmMzA0NTg3ZWQ1OTcyN2I2ODM4MWNnMXAr: 00:27:23.221 01:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:27:23.221 01:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.221 01:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:23.221 01:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:23.221 01:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:23.221 01:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:23.221 01:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:23.221 01:27:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.221 01:27:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.221 01:27:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.221 01:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.221 01:27:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:23.221 01:27:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:23.221 01:27:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:23.221 01:27:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.221 01:27:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.221 01:27:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:23.221 01:27:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.221 01:27:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:23.221 01:27:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:23.221 01:27:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:23.221 01:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:23.221 01:27:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.221 01:27:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.481 nvme0n1 00:27:23.481 01:27:45 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.481 01:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.481 01:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.481 01:27:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.481 01:27:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.481 01:27:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.481 01:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.481 01:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.481 01:27:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.481 01:27:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.481 01:27:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.481 01:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.481 01:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:27:23.481 01:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.481 01:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:23.481 01:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:23.481 01:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:23.481 01:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzMyOTQyMDgxZWI3NmQ3ZjYwOWI4YjAzOTM2N2M3OTJkY2I5NDM0NDE1NTZjYWQ2MGZkNjQzZWQxZGI0ZmFhOYVRKJM=: 00:27:23.481 01:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:23.481 01:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:23.481 01:27:45 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe4096 00:27:23.481 01:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzMyOTQyMDgxZWI3NmQ3ZjYwOWI4YjAzOTM2N2M3OTJkY2I5NDM0NDE1NTZjYWQ2MGZkNjQzZWQxZGI0ZmFhOYVRKJM=: 00:27:23.481 01:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:23.481 01:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:27:23.481 01:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.481 01:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:23.481 01:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:23.481 01:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:23.481 01:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.481 01:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:23.481 01:27:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.481 01:27:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.481 01:27:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.481 01:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.481 01:27:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:23.481 01:27:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:23.481 01:27:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:23.481 01:27:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.481 01:27:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.481 01:27:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:27:23.481 01:27:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.481 01:27:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:23.481 01:27:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:23.481 01:27:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:23.481 01:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:23.481 01:27:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.481 01:27:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.742 nvme0n1 00:27:23.742 01:27:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.742 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.742 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.742 01:27:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.742 01:27:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.742 01:27:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.742 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.742 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.742 01:27:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.742 01:27:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.742 01:27:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.742 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:23.742 
01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.742 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:27:23.742 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.742 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:23.742 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:23.742 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:23.742 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTg3NGE4NzQ3NjgwMzk5MmZkZTI0NzVhZmFhZDZhZDnvaRoE: 00:27:23.742 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGU2ZjBkZjRiNTI0ODE0NDI0YWQwMTI4NjI1ZGU1NGMyODIwZGNkZjRhN2RkNzY2ZjFmN2ZmZjI0OGY1OTI3ZsNxQrE=: 00:27:23.742 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:23.742 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:23.742 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTg3NGE4NzQ3NjgwMzk5MmZkZTI0NzVhZmFhZDZhZDnvaRoE: 00:27:23.742 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGU2ZjBkZjRiNTI0ODE0NDI0YWQwMTI4NjI1ZGU1NGMyODIwZGNkZjRhN2RkNzY2ZjFmN2ZmZjI0OGY1OTI3ZsNxQrE=: ]] 00:27:23.742 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGU2ZjBkZjRiNTI0ODE0NDI0YWQwMTI4NjI1ZGU1NGMyODIwZGNkZjRhN2RkNzY2ZjFmN2ZmZjI0OGY1OTI3ZsNxQrE=: 00:27:23.742 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:27:23.742 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.742 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:23.742 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:23.742 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 
00:27:23.742 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.742 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:23.742 01:27:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.742 01:27:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.742 01:27:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.742 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.742 01:27:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:23.742 01:27:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:23.743 01:27:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:23.743 01:27:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.743 01:27:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.743 01:27:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:23.743 01:27:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.743 01:27:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:23.743 01:27:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:23.743 01:27:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:23.743 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:23.743 01:27:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.743 01:27:46 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.003 nvme0n1 00:27:24.003 01:27:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.003 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.003 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.003 01:27:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.003 01:27:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.003 01:27:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.263 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.263 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.263 01:27:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.263 01:27:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.263 01:27:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.263 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.263 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:27:24.263 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.263 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:24.263 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:24.263 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:24.263 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjE4NzVkNWFhYmZiZWYyOTczNDJkNjMxMmJhZGRjZGMwZDc4NGU2NWRkYTI3MTc2/13YOg==: 00:27:24.263 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:Y2MyZGQ4OThkMzQ4MDcyY2E0MWRmOWFjYjc1YjQ5OGUzMjMwYWVjNzUyOTYyNTZl6RCJMA==: 00:27:24.263 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:24.263 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:24.263 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjE4NzVkNWFhYmZiZWYyOTczNDJkNjMxMmJhZGRjZGMwZDc4NGU2NWRkYTI3MTc2/13YOg==: 00:27:24.263 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2MyZGQ4OThkMzQ4MDcyY2E0MWRmOWFjYjc1YjQ5OGUzMjMwYWVjNzUyOTYyNTZl6RCJMA==: ]] 00:27:24.263 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2MyZGQ4OThkMzQ4MDcyY2E0MWRmOWFjYjc1YjQ5OGUzMjMwYWVjNzUyOTYyNTZl6RCJMA==: 00:27:24.263 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:27:24.263 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.263 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:24.263 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:24.263 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:24.263 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.263 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:24.263 01:27:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.263 01:27:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.263 01:27:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.263 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.263 01:27:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:24.263 01:27:46 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:27:24.263 01:27:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:24.263 01:27:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.263 01:27:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.263 01:27:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:24.263 01:27:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.263 01:27:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:24.263 01:27:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:24.263 01:27:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:24.263 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:24.263 01:27:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.263 01:27:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.524 nvme0n1 00:27:24.524 01:27:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.524 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.524 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.524 01:27:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.524 01:27:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.524 01:27:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.524 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.524 01:27:46 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.524 01:27:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.524 01:27:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.524 01:27:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.524 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.524 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:27:24.524 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.524 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:24.524 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:24.524 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:24.524 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjVjY2RjM2ZlYWJlZmE0NDE0NGVmNWQ1OWUxZTBmNTOgpRPI: 00:27:24.524 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTA3OTViNmUxMGU2ZjkyYTNhMDhiYzVjNWIwODU0YWRAglV+: 00:27:24.524 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:24.524 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:24.524 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjVjY2RjM2ZlYWJlZmE0NDE0NGVmNWQ1OWUxZTBmNTOgpRPI: 00:27:24.524 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTA3OTViNmUxMGU2ZjkyYTNhMDhiYzVjNWIwODU0YWRAglV+: ]] 00:27:24.524 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTA3OTViNmUxMGU2ZjkyYTNhMDhiYzVjNWIwODU0YWRAglV+: 00:27:24.524 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:27:24.524 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid 
ckey 00:27:24.524 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:24.524 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:24.524 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:24.524 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.524 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:24.524 01:27:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.524 01:27:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.524 01:27:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.524 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.524 01:27:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:24.524 01:27:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:24.524 01:27:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:24.524 01:27:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.524 01:27:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.524 01:27:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:24.524 01:27:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.524 01:27:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:24.524 01:27:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:24.524 01:27:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:24.524 01:27:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:24.524 01:27:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.524 01:27:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.095 nvme0n1 00:27:25.095 01:27:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.095 01:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.095 01:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.095 01:27:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.095 01:27:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.095 01:27:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.095 01:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.095 01:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.095 01:27:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.095 01:27:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.095 01:27:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.095 01:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:25.095 01:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:27:25.095 01:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.095 01:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:25.095 01:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:25.095 01:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:25.095 01:27:47 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWNiMmM5NzA1ZjNiYWUzMGRjOGJlNjM1Mzk3YmE4ZjM1MGEyZjQxYWEzMGY2N2E4cni5Qg==: 00:27:25.095 01:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjcwNzgzNjJlYWJmMzA0NTg3ZWQ1OTcyN2I2ODM4MWNnMXAr: 00:27:25.095 01:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:25.095 01:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:25.095 01:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWNiMmM5NzA1ZjNiYWUzMGRjOGJlNjM1Mzk3YmE4ZjM1MGEyZjQxYWEzMGY2N2E4cni5Qg==: 00:27:25.095 01:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjcwNzgzNjJlYWJmMzA0NTg3ZWQ1OTcyN2I2ODM4MWNnMXAr: ]] 00:27:25.095 01:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjcwNzgzNjJlYWJmMzA0NTg3ZWQ1OTcyN2I2ODM4MWNnMXAr: 00:27:25.095 01:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:27:25.095 01:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:25.095 01:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:25.095 01:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:25.095 01:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:25.095 01:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.095 01:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:25.095 01:27:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.095 01:27:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.095 01:27:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.095 01:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 
00:27:25.095 01:27:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:25.095 01:27:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:25.095 01:27:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:25.095 01:27:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.095 01:27:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.095 01:27:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:25.095 01:27:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.095 01:27:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:25.095 01:27:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:25.095 01:27:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:25.095 01:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:25.095 01:27:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.095 01:27:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.355 nvme0n1 00:27:25.355 01:27:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.355 01:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.355 01:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.355 01:27:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.355 01:27:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.355 01:27:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:27:25.355 01:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.355 01:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.355 01:27:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.355 01:27:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.355 01:27:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.355 01:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:25.355 01:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:27:25.355 01:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.355 01:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:25.355 01:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:25.355 01:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:25.355 01:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzMyOTQyMDgxZWI3NmQ3ZjYwOWI4YjAzOTM2N2M3OTJkY2I5NDM0NDE1NTZjYWQ2MGZkNjQzZWQxZGI0ZmFhOYVRKJM=: 00:27:25.355 01:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:25.355 01:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:25.355 01:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:25.355 01:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzMyOTQyMDgxZWI3NmQ3ZjYwOWI4YjAzOTM2N2M3OTJkY2I5NDM0NDE1NTZjYWQ2MGZkNjQzZWQxZGI0ZmFhOYVRKJM=: 00:27:25.355 01:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:25.355 01:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:27:25.355 01:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:25.355 01:27:47 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:25.356 01:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:25.356 01:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:25.356 01:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.356 01:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:25.356 01:27:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.356 01:27:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.356 01:27:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.356 01:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:25.356 01:27:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:25.356 01:27:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:25.356 01:27:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:25.356 01:27:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.356 01:27:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.356 01:27:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:25.356 01:27:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.356 01:27:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:25.356 01:27:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:25.356 01:27:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:25.356 01:27:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:25.356 01:27:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.356 01:27:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.926 nvme0n1 00:27:25.926 01:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.926 01:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.926 01:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.926 01:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.926 01:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.926 01:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.926 01:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.926 01:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.926 01:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.926 01:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.926 01:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.926 01:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:25.926 01:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:25.926 01:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:27:25.926 01:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.926 01:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:25.926 01:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:25.926 01:27:48 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=0 00:27:25.926 01:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTg3NGE4NzQ3NjgwMzk5MmZkZTI0NzVhZmFhZDZhZDnvaRoE: 00:27:25.926 01:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGU2ZjBkZjRiNTI0ODE0NDI0YWQwMTI4NjI1ZGU1NGMyODIwZGNkZjRhN2RkNzY2ZjFmN2ZmZjI0OGY1OTI3ZsNxQrE=: 00:27:25.926 01:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:25.926 01:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:25.926 01:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTg3NGE4NzQ3NjgwMzk5MmZkZTI0NzVhZmFhZDZhZDnvaRoE: 00:27:25.926 01:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGU2ZjBkZjRiNTI0ODE0NDI0YWQwMTI4NjI1ZGU1NGMyODIwZGNkZjRhN2RkNzY2ZjFmN2ZmZjI0OGY1OTI3ZsNxQrE=: ]] 00:27:25.926 01:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGU2ZjBkZjRiNTI0ODE0NDI0YWQwMTI4NjI1ZGU1NGMyODIwZGNkZjRhN2RkNzY2ZjFmN2ZmZjI0OGY1OTI3ZsNxQrE=: 00:27:25.926 01:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:27:25.926 01:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:25.926 01:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:25.926 01:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:25.926 01:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:25.926 01:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.926 01:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:25.926 01:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.926 01:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.926 01:27:48 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.926 01:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:25.926 01:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:25.926 01:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:25.926 01:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:25.926 01:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.926 01:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.926 01:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:25.926 01:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.926 01:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:25.926 01:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:25.926 01:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:25.926 01:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:25.926 01:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.926 01:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.535 nvme0n1 00:27:26.535 01:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.535 01:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.535 01:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.535 01:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.535 01:27:48 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:26.535 01:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.535 01:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.535 01:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.535 01:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.535 01:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.535 01:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.535 01:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.535 01:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:27:26.535 01:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.535 01:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:26.535 01:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:26.535 01:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:26.535 01:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjE4NzVkNWFhYmZiZWYyOTczNDJkNjMxMmJhZGRjZGMwZDc4NGU2NWRkYTI3MTc2/13YOg==: 00:27:26.535 01:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2MyZGQ4OThkMzQ4MDcyY2E0MWRmOWFjYjc1YjQ5OGUzMjMwYWVjNzUyOTYyNTZl6RCJMA==: 00:27:26.535 01:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:26.535 01:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:26.535 01:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjE4NzVkNWFhYmZiZWYyOTczNDJkNjMxMmJhZGRjZGMwZDc4NGU2NWRkYTI3MTc2/13YOg==: 00:27:26.535 01:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:Y2MyZGQ4OThkMzQ4MDcyY2E0MWRmOWFjYjc1YjQ5OGUzMjMwYWVjNzUyOTYyNTZl6RCJMA==: ]] 00:27:26.535 01:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2MyZGQ4OThkMzQ4MDcyY2E0MWRmOWFjYjc1YjQ5OGUzMjMwYWVjNzUyOTYyNTZl6RCJMA==: 00:27:26.535 01:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:27:26.535 01:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.535 01:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:26.535 01:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:26.535 01:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:26.535 01:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.535 01:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:26.535 01:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.535 01:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.535 01:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.535 01:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.535 01:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:26.535 01:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:26.535 01:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:26.535 01:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.535 01:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.535 01:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:26.535 01:27:48 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.535 01:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:26.535 01:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:26.535 01:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:26.535 01:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:26.535 01:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.535 01:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.113 nvme0n1 00:27:27.113 01:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.113 01:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.113 01:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.113 01:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.113 01:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.113 01:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.113 01:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.113 01:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.113 01:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.113 01:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.113 01:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.113 01:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.113 01:27:49 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:27:27.113 01:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.113 01:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:27.113 01:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:27.113 01:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:27.113 01:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjVjY2RjM2ZlYWJlZmE0NDE0NGVmNWQ1OWUxZTBmNTOgpRPI: 00:27:27.113 01:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTA3OTViNmUxMGU2ZjkyYTNhMDhiYzVjNWIwODU0YWRAglV+: 00:27:27.113 01:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:27.113 01:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:27.113 01:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjVjY2RjM2ZlYWJlZmE0NDE0NGVmNWQ1OWUxZTBmNTOgpRPI: 00:27:27.113 01:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTA3OTViNmUxMGU2ZjkyYTNhMDhiYzVjNWIwODU0YWRAglV+: ]] 00:27:27.113 01:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTA3OTViNmUxMGU2ZjkyYTNhMDhiYzVjNWIwODU0YWRAglV+: 00:27:27.113 01:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:27:27.113 01:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.113 01:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:27.113 01:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:27.113 01:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:27.113 01:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.113 01:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups ffdhe8192 00:27:27.113 01:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.113 01:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.113 01:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.113 01:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.113 01:27:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:27.113 01:27:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:27.113 01:27:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:27.113 01:27:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.113 01:27:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.113 01:27:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:27.113 01:27:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.113 01:27:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:27.113 01:27:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:27.113 01:27:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:27.113 01:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:27.113 01:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.113 01:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.684 nvme0n1 00:27:27.684 01:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.684 01:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:27:27.684 01:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.684 01:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.684 01:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.684 01:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.684 01:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.684 01:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.684 01:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.684 01:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.684 01:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.684 01:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.684 01:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:27:27.684 01:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.684 01:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:27.684 01:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:27.684 01:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:27.684 01:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWNiMmM5NzA1ZjNiYWUzMGRjOGJlNjM1Mzk3YmE4ZjM1MGEyZjQxYWEzMGY2N2E4cni5Qg==: 00:27:27.684 01:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjcwNzgzNjJlYWJmMzA0NTg3ZWQ1OTcyN2I2ODM4MWNnMXAr: 00:27:27.684 01:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:27.684 01:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:27.684 01:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MWNiMmM5NzA1ZjNiYWUzMGRjOGJlNjM1Mzk3YmE4ZjM1MGEyZjQxYWEzMGY2N2E4cni5Qg==: 00:27:27.684 01:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjcwNzgzNjJlYWJmMzA0NTg3ZWQ1OTcyN2I2ODM4MWNnMXAr: ]] 00:27:27.684 01:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjcwNzgzNjJlYWJmMzA0NTg3ZWQ1OTcyN2I2ODM4MWNnMXAr: 00:27:27.684 01:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:27:27.684 01:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.684 01:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:27.684 01:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:27.684 01:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:27.684 01:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.684 01:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:27.684 01:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.684 01:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.944 01:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.944 01:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.944 01:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:27.944 01:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:27.944 01:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:27.944 01:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.944 01:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.944 01:27:50 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:27.944 01:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.944 01:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:27.944 01:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:27.944 01:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:27.944 01:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:27.944 01:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.944 01:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.515 nvme0n1 00:27:28.515 01:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.515 01:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.515 01:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.515 01:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.515 01:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.515 01:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.515 01:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.515 01:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.515 01:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.515 01:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.515 01:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.515 01:27:50 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.515 01:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:27:28.515 01:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.515 01:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:28.515 01:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:28.515 01:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:28.515 01:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzMyOTQyMDgxZWI3NmQ3ZjYwOWI4YjAzOTM2N2M3OTJkY2I5NDM0NDE1NTZjYWQ2MGZkNjQzZWQxZGI0ZmFhOYVRKJM=: 00:27:28.515 01:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:28.515 01:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:28.515 01:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:28.515 01:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzMyOTQyMDgxZWI3NmQ3ZjYwOWI4YjAzOTM2N2M3OTJkY2I5NDM0NDE1NTZjYWQ2MGZkNjQzZWQxZGI0ZmFhOYVRKJM=: 00:27:28.515 01:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:28.515 01:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:27:28.515 01:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.515 01:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:28.515 01:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:28.515 01:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:28.515 01:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.515 01:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:28.515 01:27:50 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.515 01:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.515 01:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.515 01:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:28.515 01:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:28.515 01:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:28.515 01:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:28.515 01:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.515 01:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.515 01:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:28.515 01:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.515 01:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:28.515 01:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:28.515 01:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:28.515 01:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:28.515 01:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.515 01:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.086 nvme0n1 00:27:29.086 01:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.086 01:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.086 01:27:51 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.086 01:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.086 01:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.086 01:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.086 01:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.086 01:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.086 01:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.086 01:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.086 01:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.086 01:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:29.086 01:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.086 01:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:29.086 01:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:29.086 01:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:29.086 01:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjE4NzVkNWFhYmZiZWYyOTczNDJkNjMxMmJhZGRjZGMwZDc4NGU2NWRkYTI3MTc2/13YOg==: 00:27:29.086 01:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2MyZGQ4OThkMzQ4MDcyY2E0MWRmOWFjYjc1YjQ5OGUzMjMwYWVjNzUyOTYyNTZl6RCJMA==: 00:27:29.086 01:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:29.086 01:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:29.086 01:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjE4NzVkNWFhYmZiZWYyOTczNDJkNjMxMmJhZGRjZGMwZDc4NGU2NWRkYTI3MTc2/13YOg==: 00:27:29.086 01:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 
-- # [[ -z DHHC-1:02:Y2MyZGQ4OThkMzQ4MDcyY2E0MWRmOWFjYjc1YjQ5OGUzMjMwYWVjNzUyOTYyNTZl6RCJMA==: ]] 00:27:29.086 01:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2MyZGQ4OThkMzQ4MDcyY2E0MWRmOWFjYjc1YjQ5OGUzMjMwYWVjNzUyOTYyNTZl6RCJMA==: 00:27:29.086 01:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:29.086 01:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.086 01:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.086 01:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.086 01:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:27:29.086 01:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:29.086 01:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:29.086 01:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:29.086 01:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.086 01:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.086 01:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:29.086 01:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.086 01:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:29.086 01:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:29.086 01:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:29.086 01:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:29.086 01:27:51 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@648 -- # local es=0 00:27:29.086 01:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:29.086 01:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:29.086 01:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:29.086 01:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:29.086 01:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:29.086 01:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:29.086 01:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.086 01:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.086 request: 00:27:29.086 { 00:27:29.087 "name": "nvme0", 00:27:29.087 "trtype": "tcp", 00:27:29.087 "traddr": "10.0.0.1", 00:27:29.087 "adrfam": "ipv4", 00:27:29.087 "trsvcid": "4420", 00:27:29.087 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:29.087 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:29.087 "prchk_reftag": false, 00:27:29.087 "prchk_guard": false, 00:27:29.087 "hdgst": false, 00:27:29.087 "ddgst": false, 00:27:29.087 "method": "bdev_nvme_attach_controller", 00:27:29.087 "req_id": 1 00:27:29.087 } 00:27:29.087 Got JSON-RPC error response 00:27:29.087 response: 00:27:29.087 { 00:27:29.087 "code": -5, 00:27:29.087 "message": "Input/output error" 00:27:29.087 } 00:27:29.087 01:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:29.087 01:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 
00:27:29.087 01:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:29.087 01:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:29.087 01:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:29.087 01:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:27:29.087 01:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.087 01:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.087 01:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.087 01:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.087 01:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:27:29.087 01:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:27:29.087 01:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:29.087 01:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:29.087 01:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:29.087 01:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.087 01:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.087 01:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:29.087 01:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.087 01:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:29.087 01:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:29.087 01:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:29.087 01:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:29.087 01:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:27:29.087 01:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:29.087 01:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:29.087 01:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:29.087 01:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:29.087 01:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:29.087 01:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:29.087 01:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.087 01:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.087 request: 00:27:29.087 { 00:27:29.087 "name": "nvme0", 00:27:29.087 "trtype": "tcp", 00:27:29.087 "traddr": "10.0.0.1", 00:27:29.087 "adrfam": "ipv4", 00:27:29.087 "trsvcid": "4420", 00:27:29.087 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:29.087 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:29.087 "prchk_reftag": false, 00:27:29.087 "prchk_guard": false, 00:27:29.087 "hdgst": false, 00:27:29.087 "ddgst": false, 00:27:29.087 "dhchap_key": "key2", 00:27:29.087 "method": "bdev_nvme_attach_controller", 00:27:29.087 "req_id": 1 00:27:29.087 } 00:27:29.087 Got JSON-RPC error response 00:27:29.087 response: 00:27:29.087 { 
00:27:29.087 "code": -5, 00:27:29.087 "message": "Input/output error" 00:27:29.087 } 00:27:29.087 01:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:29.087 01:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:27:29.087 01:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:29.087 01:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:29.087 01:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:29.087 01:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.087 01:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:27:29.087 01:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.087 01:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.348 01:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.348 01:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:27:29.348 01:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:27:29.348 01:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:29.348 01:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:29.348 01:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:29.348 01:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.348 01:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.348 01:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:29.348 01:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.348 01:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:29.348 
01:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:29.348 01:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:29.348 01:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:29.348 01:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:27:29.348 01:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:29.348 01:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:29.348 01:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:29.348 01:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:29.348 01:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:29.348 01:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:29.348 01:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.348 01:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.348 request: 00:27:29.348 { 00:27:29.348 "name": "nvme0", 00:27:29.348 "trtype": "tcp", 00:27:29.348 "traddr": "10.0.0.1", 00:27:29.348 "adrfam": "ipv4", 00:27:29.348 "trsvcid": "4420", 00:27:29.348 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:29.348 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:29.348 
"prchk_reftag": false, 00:27:29.348 "prchk_guard": false, 00:27:29.348 "hdgst": false, 00:27:29.348 "ddgst": false, 00:27:29.348 "dhchap_key": "key1", 00:27:29.348 "dhchap_ctrlr_key": "ckey2", 00:27:29.348 "method": "bdev_nvme_attach_controller", 00:27:29.348 "req_id": 1 00:27:29.348 } 00:27:29.348 Got JSON-RPC error response 00:27:29.348 response: 00:27:29.348 { 00:27:29.348 "code": -5, 00:27:29.348 "message": "Input/output error" 00:27:29.348 } 00:27:29.348 01:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:29.348 01:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:27:29.348 01:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:29.348 01:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:29.348 01:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:29.348 01:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:27:29.348 01:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:27:29.348 01:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:27:29.348 01:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:29.348 01:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:27:29.348 01:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:29.348 01:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:27:29.348 01:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:29.348 01:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:29.348 rmmod nvme_tcp 00:27:29.348 rmmod nvme_fabrics 00:27:29.348 01:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:29.348 01:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:27:29.348 01:27:51 nvmf_tcp.nvmf_auth_host 
-- nvmf/common.sh@125 -- # return 0 00:27:29.348 01:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 1029988 ']' 00:27:29.348 01:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 1029988 00:27:29.348 01:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 1029988 ']' 00:27:29.348 01:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 1029988 00:27:29.348 01:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:27:29.348 01:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:29.348 01:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1029988 00:27:29.348 01:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:29.348 01:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:29.348 01:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1029988' 00:27:29.348 killing process with pid 1029988 00:27:29.348 01:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 1029988 00:27:29.348 01:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 1029988 00:27:29.608 01:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:29.608 01:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:29.609 01:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:29.609 01:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:29.609 01:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:29.609 01:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:29.609 01:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:27:29.609 01:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:31.519 01:27:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:31.519 01:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:31.519 01:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:31.519 01:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:27:31.519 01:27:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:27:31.519 01:27:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:27:31.777 01:27:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:31.777 01:27:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:31.777 01:27:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:31.777 01:27:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:31.777 01:27:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:31.777 01:27:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:31.777 01:27:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:34.319 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:34.319 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:34.319 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:34.319 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 
00:27:34.319 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:34.319 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:34.319 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:34.319 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:34.319 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:34.319 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:34.319 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:34.319 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:34.319 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:34.319 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:34.319 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:34.319 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:35.260 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:27:35.260 01:27:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.cww /tmp/spdk.key-null.ONQ /tmp/spdk.key-sha256.e2L /tmp/spdk.key-sha384.wiV /tmp/spdk.key-sha512.zLH /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:27:35.260 01:27:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:37.824 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:27:37.824 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:27:37.824 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:27:37.824 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:27:37.824 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:27:37.824 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:27:37.824 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:27:37.824 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:27:37.824 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:27:37.824 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:27:37.824 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:27:37.824 
0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:27:37.824 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:27:37.824 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:27:37.824 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:27:37.824 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:27:37.824 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:27:37.824 00:27:37.824 real 0m47.734s 00:27:37.824 user 0m43.072s 00:27:37.824 sys 0m11.455s 00:27:37.824 01:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:37.824 01:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.824 ************************************ 00:27:37.824 END TEST nvmf_auth_host 00:27:37.824 ************************************ 00:27:37.824 01:28:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:37.824 01:28:00 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:27:37.824 01:28:00 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:37.824 01:28:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:37.824 01:28:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:37.824 01:28:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:37.824 ************************************ 00:27:37.824 START TEST nvmf_digest 00:27:37.824 ************************************ 00:27:37.824 01:28:00 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:38.085 * Looking for test storage... 
00:27:38.085 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:38.085 01:28:00 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:38.085 01:28:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:27:38.085 01:28:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:38.085 01:28:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:38.085 01:28:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:38.085 01:28:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:38.085 01:28:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:38.085 01:28:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:38.085 01:28:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:38.085 01:28:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:38.085 01:28:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:38.085 01:28:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:38.085 01:28:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:38.085 01:28:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:38.085 01:28:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:38.085 01:28:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:38.085 01:28:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:38.085 01:28:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:38.085 01:28:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:38.085 01:28:00 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:38.085 01:28:00 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:38.085 01:28:00 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:38.085 01:28:00 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.085 01:28:00 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.085 01:28:00 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.085 01:28:00 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:27:38.085 01:28:00 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.085 01:28:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:27:38.085 01:28:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:38.085 01:28:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:38.085 01:28:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:38.085 01:28:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:38.085 01:28:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:38.085 01:28:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:38.085 01:28:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:38.085 01:28:00 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:27:38.085 01:28:00 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:27:38.085 01:28:00 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:27:38.085 01:28:00 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:27:38.085 01:28:00 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:27:38.085 01:28:00 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:27:38.085 01:28:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:38.085 01:28:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:38.085 01:28:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:38.085 01:28:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:38.085 01:28:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:38.085 01:28:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:38.085 01:28:00 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:38.085 01:28:00 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:38.085 01:28:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:38.085 01:28:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:38.085 01:28:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:27:38.085 01:28:00 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:43.360 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:43.360 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:27:43.360 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:43.360 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 
00:27:43.360 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:43.360 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:43.360 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:43.360 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:27:43.360 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:43.360 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:27:43.360 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:27:43.360 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:27:43.360 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:27:43.360 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:27:43.360 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:43.361 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:43.361 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:43.361 Found net devices under 0000:86:00.0: cvl_0_0 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 
00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:43.361 Found net devices under 0000:86:00.1: cvl_0_1 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip 
netns add cvl_0_0_ns_spdk 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:43.361 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:43.361 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:27:43.361 00:27:43.361 --- 10.0.0.2 ping statistics --- 00:27:43.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:43.361 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:43.361 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:43.361 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.304 ms 00:27:43.361 00:27:43.361 --- 10.0.0.1 ping statistics --- 00:27:43.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:43.361 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:43.361 ************************************ 00:27:43.361 START TEST nvmf_digest_clean 00:27:43.361 ************************************ 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:27:43.361 01:28:05 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:27:43.361 01:28:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:27:43.622 01:28:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:27:43.622 01:28:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:43.622 01:28:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:43.622 01:28:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:43.622 01:28:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=1042790 00:27:43.622 01:28:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 1042790 00:27:43.622 01:28:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:43.622 01:28:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1042790 ']' 00:27:43.622 01:28:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:43.622 01:28:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:43.622 01:28:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:43.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:43.622 01:28:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:43.622 01:28:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:43.622 [2024-07-25 01:28:05.906861] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:27:43.622 [2024-07-25 01:28:05.906902] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:43.622 EAL: No free 2048 kB hugepages reported on node 1 00:27:43.622 [2024-07-25 01:28:05.962644] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:43.622 [2024-07-25 01:28:06.041819] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:43.622 [2024-07-25 01:28:06.041852] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:43.622 [2024-07-25 01:28:06.041859] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:43.622 [2024-07-25 01:28:06.041867] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:43.622 [2024-07-25 01:28:06.041872] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:43.622 [2024-07-25 01:28:06.041893] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:44.563 01:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:44.563 01:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:27:44.563 01:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:44.563 01:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:44.563 01:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:44.563 01:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:44.563 01:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:27:44.563 01:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:27:44.563 01:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:27:44.563 01:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.563 01:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:44.563 null0 00:27:44.563 [2024-07-25 01:28:06.836855] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:44.563 [2024-07-25 01:28:06.861013] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:44.563 01:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.563 01:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:27:44.563 01:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:44.563 
01:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:44.563 01:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:44.563 01:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:44.563 01:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:44.563 01:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:44.563 01:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1043037 00:27:44.563 01:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1043037 /var/tmp/bperf.sock 00:27:44.563 01:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:44.563 01:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1043037 ']' 00:27:44.563 01:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:44.563 01:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:44.563 01:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:44.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:44.563 01:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:44.563 01:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:44.563 [2024-07-25 01:28:06.912793] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:27:44.563 [2024-07-25 01:28:06.912834] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1043037 ] 00:27:44.563 EAL: No free 2048 kB hugepages reported on node 1 00:27:44.563 [2024-07-25 01:28:06.966513] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:44.563 [2024-07-25 01:28:07.045451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:45.504 01:28:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:45.504 01:28:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:27:45.504 01:28:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:45.504 01:28:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:45.504 01:28:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:45.504 01:28:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:45.504 01:28:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:46.074 nvme0n1 00:27:46.075 01:28:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:46.075 01:28:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s 
/var/tmp/bperf.sock perform_tests 00:27:46.075 Running I/O for 2 seconds... 00:27:47.988 00:27:47.988 Latency(us) 00:27:47.988 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:47.988 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:47.988 nvme0n1 : 2.00 25852.02 100.98 0.00 0.00 4946.00 2436.23 23137.06 00:27:47.988 =================================================================================================================== 00:27:47.988 Total : 25852.02 100.98 0.00 0.00 4946.00 2436.23 23137.06 00:27:47.988 0 00:27:47.988 01:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:47.988 01:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:48.249 01:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:48.249 01:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:48.249 | select(.opcode=="crc32c") 00:27:48.249 | "\(.module_name) \(.executed)"' 00:27:48.249 01:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:48.249 01:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:48.249 01:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:48.249 01:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:48.249 01:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:48.249 01:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1043037 00:27:48.249 01:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1043037 ']' 00:27:48.249 01:28:10 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1043037 00:27:48.249 01:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:27:48.249 01:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:48.249 01:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1043037 00:27:48.249 01:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:48.249 01:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:48.249 01:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1043037' 00:27:48.249 killing process with pid 1043037 00:27:48.249 01:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1043037 00:27:48.249 Received shutdown signal, test time was about 2.000000 seconds 00:27:48.249 00:27:48.249 Latency(us) 00:27:48.249 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:48.249 =================================================================================================================== 00:27:48.249 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:48.249 01:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1043037 00:27:48.509 01:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:27:48.509 01:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:48.509 01:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:48.509 01:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:48.509 01:28:10 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:27:48.509 01:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:27:48.509 01:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:48.509 01:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1043731 00:27:48.509 01:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1043731 /var/tmp/bperf.sock 00:27:48.509 01:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:48.509 01:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1043731 ']' 00:27:48.509 01:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:48.509 01:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:48.509 01:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:48.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:48.509 01:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:48.509 01:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:48.509 [2024-07-25 01:28:10.934461] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:27:48.509 [2024-07-25 01:28:10.934508] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1043731 ] 00:27:48.509 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:48.509 Zero copy mechanism will not be used. 00:27:48.509 EAL: No free 2048 kB hugepages reported on node 1 00:27:48.510 [2024-07-25 01:28:10.988411] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:48.770 [2024-07-25 01:28:11.068451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:49.341 01:28:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:49.341 01:28:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:27:49.341 01:28:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:49.341 01:28:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:49.341 01:28:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:49.601 01:28:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:49.601 01:28:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:49.874 nvme0n1 00:27:49.874 01:28:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:49.874 01:28:12 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:49.874 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:49.874 Zero copy mechanism will not be used. 00:27:49.874 Running I/O for 2 seconds... 00:27:52.468 00:27:52.468 Latency(us) 00:27:52.468 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:52.469 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:27:52.469 nvme0n1 : 2.00 2147.54 268.44 0.00 0.00 7446.98 6553.60 32141.13 00:27:52.469 =================================================================================================================== 00:27:52.469 Total : 2147.54 268.44 0.00 0.00 7446.98 6553.60 32141.13 00:27:52.469 0 00:27:52.469 01:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:52.469 01:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:52.469 01:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:52.469 01:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:52.469 | select(.opcode=="crc32c") 00:27:52.469 | "\(.module_name) \(.executed)"' 00:27:52.469 01:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:52.469 01:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:52.469 01:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:52.469 01:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:52.469 01:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:27:52.469 01:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1043731 00:27:52.469 01:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1043731 ']' 00:27:52.469 01:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1043731 00:27:52.469 01:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:27:52.469 01:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:52.469 01:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1043731 00:27:52.469 01:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:52.469 01:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:52.469 01:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1043731' 00:27:52.469 killing process with pid 1043731 00:27:52.469 01:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1043731 00:27:52.469 Received shutdown signal, test time was about 2.000000 seconds 00:27:52.469 00:27:52.469 Latency(us) 00:27:52.469 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:52.469 =================================================================================================================== 00:27:52.469 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:52.469 01:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1043731 00:27:52.469 01:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:27:52.469 01:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 
00:27:52.469 01:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:52.469 01:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:27:52.469 01:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:52.469 01:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:52.469 01:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:52.469 01:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1044230 00:27:52.469 01:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1044230 /var/tmp/bperf.sock 00:27:52.469 01:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:52.469 01:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1044230 ']' 00:27:52.469 01:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:52.469 01:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:52.469 01:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:52.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:52.469 01:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:52.469 01:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:52.469 [2024-07-25 01:28:14.812118] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:27:52.469 [2024-07-25 01:28:14.812166] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1044230 ] 00:27:52.469 EAL: No free 2048 kB hugepages reported on node 1 00:27:52.469 [2024-07-25 01:28:14.865997] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:52.469 [2024-07-25 01:28:14.943319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:53.411 01:28:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:53.411 01:28:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:27:53.411 01:28:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:53.411 01:28:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:53.411 01:28:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:53.411 01:28:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:53.411 01:28:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:53.672 nvme0n1 00:27:53.672 01:28:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:53.672 01:28:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s 
/var/tmp/bperf.sock perform_tests 00:27:53.932 Running I/O for 2 seconds... 00:27:55.844 00:27:55.844 Latency(us) 00:27:55.844 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:55.844 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:55.844 nvme0n1 : 2.01 26798.78 104.68 0.00 0.00 4763.54 2436.23 34420.65 00:27:55.844 =================================================================================================================== 00:27:55.844 Total : 26798.78 104.68 0.00 0.00 4763.54 2436.23 34420.65 00:27:55.844 0 00:27:55.844 01:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:55.844 01:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:55.844 01:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:55.844 01:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:55.844 | select(.opcode=="crc32c") 00:27:55.844 | "\(.module_name) \(.executed)"' 00:27:55.844 01:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:56.105 01:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:56.105 01:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:56.105 01:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:56.105 01:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:56.105 01:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1044230 00:27:56.105 01:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1044230 ']' 00:27:56.105 01:28:18 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1044230 00:27:56.105 01:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:27:56.105 01:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:56.105 01:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1044230 00:27:56.105 01:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:56.105 01:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:56.105 01:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1044230' 00:27:56.105 killing process with pid 1044230 00:27:56.105 01:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1044230 00:27:56.105 Received shutdown signal, test time was about 2.000000 seconds 00:27:56.105 00:27:56.105 Latency(us) 00:27:56.105 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:56.105 =================================================================================================================== 00:27:56.105 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:56.105 01:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1044230 00:27:56.366 01:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:27:56.366 01:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:56.366 01:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:56.366 01:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:27:56.366 01:28:18 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:27:56.366 01:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:27:56.366 01:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:56.366 01:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1044911 00:27:56.366 01:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1044911 /var/tmp/bperf.sock 00:27:56.366 01:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:56.366 01:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1044911 ']' 00:27:56.366 01:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:56.366 01:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:56.366 01:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:56.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:56.366 01:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:56.366 01:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:56.366 [2024-07-25 01:28:18.675574] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:27:56.366 [2024-07-25 01:28:18.675625] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1044911 ] 00:27:56.366 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:56.366 Zero copy mechanism will not be used. 00:27:56.366 EAL: No free 2048 kB hugepages reported on node 1 00:27:56.366 [2024-07-25 01:28:18.730718] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:56.366 [2024-07-25 01:28:18.798717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:57.308 01:28:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:57.308 01:28:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:27:57.308 01:28:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:57.308 01:28:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:57.308 01:28:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:57.308 01:28:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:57.308 01:28:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:57.568 nvme0n1 00:27:57.568 01:28:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:57.568 01:28:20 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:57.828 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:57.828 Zero copy mechanism will not be used. 00:27:57.828 Running I/O for 2 seconds... 00:27:59.736 00:27:59.736 Latency(us) 00:27:59.736 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:59.736 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:59.736 nvme0n1 : 2.01 1372.67 171.58 0.00 0.00 11622.35 8548.17 37384.01 00:27:59.736 =================================================================================================================== 00:27:59.736 Total : 1372.67 171.58 0.00 0.00 11622.35 8548.17 37384.01 00:27:59.736 0 00:27:59.736 01:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:59.736 01:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:59.736 01:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:59.736 01:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:59.736 | select(.opcode=="crc32c") 00:27:59.736 | "\(.module_name) \(.executed)"' 00:27:59.736 01:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:59.996 01:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:59.996 01:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:59.996 01:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:59.996 01:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:27:59.996 01:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1044911 00:27:59.996 01:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1044911 ']' 00:27:59.996 01:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1044911 00:27:59.996 01:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:27:59.996 01:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:59.996 01:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1044911 00:27:59.996 01:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:59.996 01:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:59.996 01:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1044911' 00:27:59.996 killing process with pid 1044911 00:27:59.996 01:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1044911 00:27:59.996 Received shutdown signal, test time was about 2.000000 seconds 00:27:59.996 00:27:59.996 Latency(us) 00:27:59.996 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:59.996 =================================================================================================================== 00:27:59.996 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:59.996 01:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1044911 00:28:00.255 01:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1042790 00:28:00.255 01:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1042790 ']' 00:28:00.255 
01:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1042790 00:28:00.255 01:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:28:00.255 01:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:00.255 01:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1042790 00:28:00.255 01:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:00.255 01:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:00.255 01:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1042790' 00:28:00.256 killing process with pid 1042790 00:28:00.256 01:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1042790 00:28:00.256 01:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1042790 00:28:00.515 00:28:00.515 real 0m16.959s 00:28:00.515 user 0m33.670s 00:28:00.515 sys 0m3.358s 00:28:00.515 01:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:00.515 01:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:00.515 ************************************ 00:28:00.515 END TEST nvmf_digest_clean 00:28:00.515 ************************************ 00:28:00.515 01:28:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:28:00.515 01:28:22 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:28:00.515 01:28:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:00.515 01:28:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:00.515 01:28:22 
nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:00.515 ************************************ 00:28:00.515 START TEST nvmf_digest_error 00:28:00.516 ************************************ 00:28:00.516 01:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:28:00.516 01:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:28:00.516 01:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:00.516 01:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:00.516 01:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:00.516 01:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=1045633 00:28:00.516 01:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 1045633 00:28:00.516 01:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:00.516 01:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1045633 ']' 00:28:00.516 01:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:00.516 01:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:00.516 01:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:00.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:00.516 01:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:00.516 01:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:00.516 [2024-07-25 01:28:22.938791] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:28:00.516 [2024-07-25 01:28:22.938831] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:00.516 EAL: No free 2048 kB hugepages reported on node 1 00:28:00.516 [2024-07-25 01:28:22.994330] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:00.778 [2024-07-25 01:28:23.074653] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:00.778 [2024-07-25 01:28:23.074689] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:00.778 [2024-07-25 01:28:23.074696] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:00.778 [2024-07-25 01:28:23.074702] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:00.778 [2024-07-25 01:28:23.074707] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:00.778 [2024-07-25 01:28:23.074724] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:01.348 01:28:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:01.348 01:28:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:28:01.348 01:28:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:01.348 01:28:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:01.348 01:28:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:01.348 01:28:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:01.348 01:28:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:28:01.348 01:28:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.348 01:28:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:01.348 [2024-07-25 01:28:23.776753] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:28:01.348 01:28:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.348 01:28:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:28:01.348 01:28:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:28:01.348 01:28:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.348 01:28:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:01.608 null0 00:28:01.608 [2024-07-25 01:28:23.865429] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:01.608 
[2024-07-25 01:28:23.889586] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:01.608 01:28:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.608 01:28:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:28:01.608 01:28:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:01.608 01:28:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:01.608 01:28:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:01.608 01:28:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:01.608 01:28:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1045878 00:28:01.608 01:28:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1045878 /var/tmp/bperf.sock 00:28:01.608 01:28:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:28:01.608 01:28:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1045878 ']' 00:28:01.608 01:28:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:01.608 01:28:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:01.608 01:28:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:01.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
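The `waitforlisten 1045878 /var/tmp/bperf.sock` step above blocks the test script until the freshly launched bdevperf process creates its RPC socket. A minimal sketch of that kind of polling loop (a simplified, hypothetical helper, not the actual `autotest_common.sh` implementation, which additionally checks that the pid is still alive):

```python
import os
import time

def wait_for_listen(sock_path, timeout=10.0, interval=0.1):
    """Poll until a UNIX-domain socket path appears, or give up after timeout.

    Returns True if sock_path showed up in time, False otherwise.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.exists(sock_path):
            return True
        time.sleep(interval)
    return False
```

Once this returns, the script can safely issue `rpc.py -s /var/tmp/bperf.sock` commands against the new process.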
00:28:01.608 01:28:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:01.608 01:28:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:01.608 [2024-07-25 01:28:23.938391] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:28:01.608 [2024-07-25 01:28:23.938430] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1045878 ] 00:28:01.608 EAL: No free 2048 kB hugepages reported on node 1 00:28:01.608 [2024-07-25 01:28:23.992444] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:01.608 [2024-07-25 01:28:24.071474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:02.550 01:28:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:02.550 01:28:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:28:02.550 01:28:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:02.550 01:28:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:02.550 01:28:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:02.550 01:28:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.550 01:28:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:02.550 01:28:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:28:02.550 01:28:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:02.550 01:28:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:03.121 nvme0n1 00:28:03.121 01:28:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:03.121 01:28:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.121 01:28:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:03.121 01:28:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.121 01:28:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:03.121 01:28:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:03.121 Running I/O for 2 seconds... 
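The bursts of `data digest error` lines that follow are the intended effect of `accel_error_inject_error -o crc32c -t corrupt -i 256`: the injected corruption makes the host's recomputed NVMe/TCP data digest (a CRC-32C over the PDU payload) disagree with the digest on the wire, so each READ completes with a transient transport error. A rough, self-contained illustration of the digest check itself, using a table-driven CRC-32C (Castagnoli) rather than SPDK's accel framework:

```python
def _crc32c_table():
    # Reflected CRC-32C (Castagnoli) polynomial, as used by NVMe/TCP digests
    poly = 0x82F63B78
    table = []
    for i in range(256):
        crc = i
        for _ in range(8):
            crc = (crc >> 1) ^ poly if crc & 1 else crc >> 1
        table.append(crc)
    return table

_TABLE = _crc32c_table()

def crc32c(data: bytes) -> int:
    """Compute CRC-32C with the standard init/final XOR of 0xFFFFFFFF."""
    crc = 0xFFFFFFFF
    for b in data:
        crc = _TABLE[(crc ^ b) & 0xFF] ^ (crc >> 8)
    return crc ^ 0xFFFFFFFF

def digest_ok(payload: bytes, received_digest: int) -> bool:
    # Receiver side: recompute the data digest and compare it to the one
    # carried in the PDU; a mismatch is reported as a data digest error.
    return crc32c(payload) == received_digest
```

When the injected corruption flips the computed CRC, `digest_ok` fails and the transport surfaces exactly the `COMMAND TRANSIENT TRANSPORT ERROR (00/22)` completions seen in the log.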
00:28:03.121 [2024-07-25 01:28:25.488011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:03.121 [2024-07-25 01:28:25.488049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.121 [2024-07-25 01:28:25.488059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.121 [2024-07-25 01:28:25.500063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:03.121 [2024-07-25 01:28:25.500087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.121 [2024-07-25 01:28:25.500096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.121 [2024-07-25 01:28:25.509011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:03.121 [2024-07-25 01:28:25.509032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:18760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.121 [2024-07-25 01:28:25.509040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.121 [2024-07-25 01:28:25.520857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:03.121 [2024-07-25 01:28:25.520878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.121 [2024-07-25 01:28:25.520887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.121 [2024-07-25 01:28:25.531723] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:03.121 [2024-07-25 01:28:25.531744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.121 [2024-07-25 01:28:25.531752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.121 [2024-07-25 01:28:25.540220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:03.121 [2024-07-25 01:28:25.540240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:19044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.121 [2024-07-25 01:28:25.540249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.121 [2024-07-25 01:28:25.550606] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:03.121 [2024-07-25 01:28:25.550626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:17342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.122 [2024-07-25 01:28:25.550634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.122 [2024-07-25 01:28:25.559607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:03.122 [2024-07-25 01:28:25.559627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.122 [2024-07-25 01:28:25.559639] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.122 [2024-07-25 01:28:25.570560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:03.122 [2024-07-25 01:28:25.570580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:9443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.122 [2024-07-25 01:28:25.570589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.122 [2024-07-25 01:28:25.579550] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:03.122 [2024-07-25 01:28:25.579570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:1776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.122 [2024-07-25 01:28:25.579578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.122 [2024-07-25 01:28:25.588771] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:03.122 [2024-07-25 01:28:25.588791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:11964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.122 [2024-07-25 01:28:25.588799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.122 [2024-07-25 01:28:25.598997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:03.122 [2024-07-25 01:28:25.599017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:03.122 [2024-07-25 01:28:25.599026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.122 [2024-07-25 01:28:25.608602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:03.122 [2024-07-25 01:28:25.608622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:19309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.122 [2024-07-25 01:28:25.608629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.382 [2024-07-25 01:28:25.617641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:03.383 [2024-07-25 01:28:25.617662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:19117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.383 [2024-07-25 01:28:25.617670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.383 [2024-07-25 01:28:25.627331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:03.383 [2024-07-25 01:28:25.627349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:13454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.383 [2024-07-25 01:28:25.627357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.383 [2024-07-25 01:28:25.637222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:03.383 [2024-07-25 01:28:25.637242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:25 nsid:1 lba:9182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.383 [2024-07-25 01:28:25.637249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.383 [2024-07-25 01:28:25.647854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:03.383 [2024-07-25 01:28:25.647874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:23517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.383 [2024-07-25 01:28:25.647882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.383 [2024-07-25 01:28:25.656109] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:03.383 [2024-07-25 01:28:25.656129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:3438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.383 [2024-07-25 01:28:25.656136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.383 [2024-07-25 01:28:25.666833] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:03.383 [2024-07-25 01:28:25.666852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:4849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.383 [2024-07-25 01:28:25.666860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.383 [2024-07-25 01:28:25.675470] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:03.383 [2024-07-25 01:28:25.675490] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:16015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.383 [2024-07-25 01:28:25.675497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.383 [2024-07-25 01:28:25.685340] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:03.383 [2024-07-25 01:28:25.685360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:15847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.383 [2024-07-25 01:28:25.685368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.383 [2024-07-25 01:28:25.694761] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:03.383 [2024-07-25 01:28:25.694781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:22772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.383 [2024-07-25 01:28:25.694789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.383 [2024-07-25 01:28:25.703976] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:03.383 [2024-07-25 01:28:25.703995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:10515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.383 [2024-07-25 01:28:25.704003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.383 [2024-07-25 01:28:25.713670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x115dfb0) 00:28:03.383 [2024-07-25 01:28:25.713690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:23552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.383 [2024-07-25 01:28:25.713697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.383 [2024-07-25 01:28:25.722344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:03.383 [2024-07-25 01:28:25.722363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:11697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.383 [2024-07-25 01:28:25.722375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.383 [2024-07-25 01:28:25.731963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:03.383 [2024-07-25 01:28:25.731982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:10994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.383 [2024-07-25 01:28:25.731990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.383 [2024-07-25 01:28:25.740880] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:03.383 [2024-07-25 01:28:25.740898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:12691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.383 [2024-07-25 01:28:25.740906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.383 [2024-07-25 01:28:25.750702] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:03.383 [2024-07-25 01:28:25.750722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:25114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.383 [2024-07-25 01:28:25.750729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.383 [2024-07-25 01:28:25.760283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:03.383 [2024-07-25 01:28:25.760302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.383 [2024-07-25 01:28:25.760309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.383 [2024-07-25 01:28:25.768930] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:03.383 [2024-07-25 01:28:25.768949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:19208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.383 [2024-07-25 01:28:25.768956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.383 [2024-07-25 01:28:25.779824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:03.383 [2024-07-25 01:28:25.779844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:9340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.383 [2024-07-25 01:28:25.779852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0
00:28:03.383 [2024-07-25 01:28:25.787843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0)
00:28:03.383 [2024-07-25 01:28:25.787863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:5048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.383 [2024-07-25 01:28:25.787871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-entry pattern (data digest error on tqpair=(0x115dfb0), a READ command print on sqid:1 with varying cid/nsid:1/lba, and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion on qid:1) repeats for dozens more commands from 01:28:25.797 through 01:28:26.548 ...]
00:28:04.169 [2024-07-25 01:28:26.556346]
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.169 [2024-07-25 01:28:26.556365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:12969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.169 [2024-07-25 01:28:26.556373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.169 [2024-07-25 01:28:26.566455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.169 [2024-07-25 01:28:26.566474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.169 [2024-07-25 01:28:26.566481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.169 [2024-07-25 01:28:26.575471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.169 [2024-07-25 01:28:26.575490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:3012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.169 [2024-07-25 01:28:26.575498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.169 [2024-07-25 01:28:26.584915] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.169 [2024-07-25 01:28:26.584934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:9595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.169 [2024-07-25 01:28:26.584942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:28:04.169 [2024-07-25 01:28:26.594741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.169 [2024-07-25 01:28:26.594759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:20833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.169 [2024-07-25 01:28:26.594767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.169 [2024-07-25 01:28:26.603437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.169 [2024-07-25 01:28:26.603456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:22837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.169 [2024-07-25 01:28:26.603463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.169 [2024-07-25 01:28:26.613027] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.169 [2024-07-25 01:28:26.613052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:24230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.169 [2024-07-25 01:28:26.613060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.169 [2024-07-25 01:28:26.622123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.169 [2024-07-25 01:28:26.622145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:7887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.169 [2024-07-25 01:28:26.622153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.169 [2024-07-25 01:28:26.632219] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.169 [2024-07-25 01:28:26.632238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.169 [2024-07-25 01:28:26.632246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.169 [2024-07-25 01:28:26.641266] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.169 [2024-07-25 01:28:26.641286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.169 [2024-07-25 01:28:26.641294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.169 [2024-07-25 01:28:26.652023] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.169 [2024-07-25 01:28:26.652041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:3561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.169 [2024-07-25 01:28:26.652054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.430 [2024-07-25 01:28:26.660516] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.430 [2024-07-25 01:28:26.660536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:6130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.430 [2024-07-25 01:28:26.660544] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.430 [2024-07-25 01:28:26.669958] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.430 [2024-07-25 01:28:26.669978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:3442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.430 [2024-07-25 01:28:26.669986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.430 [2024-07-25 01:28:26.679706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.430 [2024-07-25 01:28:26.679725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:11514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.430 [2024-07-25 01:28:26.679732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.430 [2024-07-25 01:28:26.689334] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.430 [2024-07-25 01:28:26.689353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:13338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.430 [2024-07-25 01:28:26.689360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.430 [2024-07-25 01:28:26.698771] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.430 [2024-07-25 01:28:26.698791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:2621 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:04.430 [2024-07-25 01:28:26.698799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.430 [2024-07-25 01:28:26.707499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.430 [2024-07-25 01:28:26.707519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:21862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.431 [2024-07-25 01:28:26.707526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.431 [2024-07-25 01:28:26.717457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.431 [2024-07-25 01:28:26.717476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:3752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.431 [2024-07-25 01:28:26.717483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.431 [2024-07-25 01:28:26.727175] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.431 [2024-07-25 01:28:26.727194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:23351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.431 [2024-07-25 01:28:26.727202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.431 [2024-07-25 01:28:26.737671] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.431 [2024-07-25 01:28:26.737690] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:71 nsid:1 lba:504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.431 [2024-07-25 01:28:26.737698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.431 [2024-07-25 01:28:26.747729] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.431 [2024-07-25 01:28:26.747748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.431 [2024-07-25 01:28:26.747756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.431 [2024-07-25 01:28:26.756736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.431 [2024-07-25 01:28:26.756755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.431 [2024-07-25 01:28:26.756763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.431 [2024-07-25 01:28:26.767851] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.431 [2024-07-25 01:28:26.767871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.431 [2024-07-25 01:28:26.767879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.431 [2024-07-25 01:28:26.777485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.431 [2024-07-25 
01:28:26.777505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:7261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.431 [2024-07-25 01:28:26.777513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.431 [2024-07-25 01:28:26.787357] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.431 [2024-07-25 01:28:26.787376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:19186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.431 [2024-07-25 01:28:26.787387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.431 [2024-07-25 01:28:26.801052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.431 [2024-07-25 01:28:26.801087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:19364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.431 [2024-07-25 01:28:26.801095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.431 [2024-07-25 01:28:26.810153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.431 [2024-07-25 01:28:26.810172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:4441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.431 [2024-07-25 01:28:26.810180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.431 [2024-07-25 01:28:26.819569] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x115dfb0) 00:28:04.431 [2024-07-25 01:28:26.819587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:9620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.431 [2024-07-25 01:28:26.819595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.431 [2024-07-25 01:28:26.829007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.431 [2024-07-25 01:28:26.829026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:7108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.431 [2024-07-25 01:28:26.829034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.431 [2024-07-25 01:28:26.838035] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.431 [2024-07-25 01:28:26.838058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.431 [2024-07-25 01:28:26.838066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.431 [2024-07-25 01:28:26.848965] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.431 [2024-07-25 01:28:26.848984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:10122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.431 [2024-07-25 01:28:26.848992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.431 [2024-07-25 01:28:26.857320] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.431 [2024-07-25 01:28:26.857338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:9914 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.431 [2024-07-25 01:28:26.857346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.431 [2024-07-25 01:28:26.866581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.431 [2024-07-25 01:28:26.866600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:9772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.431 [2024-07-25 01:28:26.866608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.431 [2024-07-25 01:28:26.876497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.431 [2024-07-25 01:28:26.876520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:21018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.431 [2024-07-25 01:28:26.876528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.431 [2024-07-25 01:28:26.886555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.431 [2024-07-25 01:28:26.886574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:16637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.431 [2024-07-25 01:28:26.886581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:28:04.431 [2024-07-25 01:28:26.896429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.431 [2024-07-25 01:28:26.896448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:7127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.431 [2024-07-25 01:28:26.896455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.431 [2024-07-25 01:28:26.905013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.431 [2024-07-25 01:28:26.905032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:20445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.431 [2024-07-25 01:28:26.905039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.431 [2024-07-25 01:28:26.915977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.431 [2024-07-25 01:28:26.915997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:13155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.431 [2024-07-25 01:28:26.916005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.692 [2024-07-25 01:28:26.925141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.692 [2024-07-25 01:28:26.925161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.692 [2024-07-25 01:28:26.925169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.692 [2024-07-25 01:28:26.935762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.692 [2024-07-25 01:28:26.935782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:13795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.692 [2024-07-25 01:28:26.935789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.692 [2024-07-25 01:28:26.947660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.692 [2024-07-25 01:28:26.947680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:17150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.692 [2024-07-25 01:28:26.947687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.692 [2024-07-25 01:28:26.960256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.692 [2024-07-25 01:28:26.960275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.692 [2024-07-25 01:28:26.960283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.692 [2024-07-25 01:28:26.968453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.692 [2024-07-25 01:28:26.968472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:7630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.692 [2024-07-25 
01:28:26.968479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.692 [2024-07-25 01:28:26.977992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.692 [2024-07-25 01:28:26.978012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:10440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.692 [2024-07-25 01:28:26.978019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.692 [2024-07-25 01:28:26.988927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.692 [2024-07-25 01:28:26.988946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:14374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.693 [2024-07-25 01:28:26.988954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.693 [2024-07-25 01:28:26.997010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.693 [2024-07-25 01:28:26.997029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.693 [2024-07-25 01:28:26.997037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.693 [2024-07-25 01:28:27.008072] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.693 [2024-07-25 01:28:27.008091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:22487 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.693 [2024-07-25 01:28:27.008099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.693 [2024-07-25 01:28:27.017503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.693 [2024-07-25 01:28:27.017522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:24930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.693 [2024-07-25 01:28:27.017529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.693 [2024-07-25 01:28:27.027608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.693 [2024-07-25 01:28:27.027627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:10328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.693 [2024-07-25 01:28:27.027634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.693 [2024-07-25 01:28:27.037981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.693 [2024-07-25 01:28:27.038000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.693 [2024-07-25 01:28:27.038007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.693 [2024-07-25 01:28:27.047684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.693 [2024-07-25 01:28:27.047703] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:15991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.693 [2024-07-25 01:28:27.047714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.693 [2024-07-25 01:28:27.065074] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.693 [2024-07-25 01:28:27.065093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:2936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.693 [2024-07-25 01:28:27.065101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.693 [2024-07-25 01:28:27.075333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.693 [2024-07-25 01:28:27.075352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:9995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.693 [2024-07-25 01:28:27.075360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.693 [2024-07-25 01:28:27.085095] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.693 [2024-07-25 01:28:27.085115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:22288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.693 [2024-07-25 01:28:27.085122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.693 [2024-07-25 01:28:27.097553] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x115dfb0) 00:28:04.693 [2024-07-25 01:28:27.097572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:24733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.693 [2024-07-25 01:28:27.097579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.693 [2024-07-25 01:28:27.109488] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.693 [2024-07-25 01:28:27.109507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:3011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.693 [2024-07-25 01:28:27.109515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.693 [2024-07-25 01:28:27.118187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.693 [2024-07-25 01:28:27.118206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:12323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.693 [2024-07-25 01:28:27.118213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.693 [2024-07-25 01:28:27.131194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.693 [2024-07-25 01:28:27.131213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:12363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.693 [2024-07-25 01:28:27.131221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.693 [2024-07-25 01:28:27.143055] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.693 [2024-07-25 01:28:27.143075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:2214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.693 [2024-07-25 01:28:27.143082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.693 [2024-07-25 01:28:27.152696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.693 [2024-07-25 01:28:27.152715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:24046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.693 [2024-07-25 01:28:27.152723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.693 [2024-07-25 01:28:27.161936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.693 [2024-07-25 01:28:27.161955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:7671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.693 [2024-07-25 01:28:27.161963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.693 [2024-07-25 01:28:27.172672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.693 [2024-07-25 01:28:27.172691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:11444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.693 [2024-07-25 01:28:27.172699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:28:04.693 [2024-07-25 01:28:27.181605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.693 [2024-07-25 01:28:27.181624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:13919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.693 [2024-07-25 01:28:27.181632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.954 [2024-07-25 01:28:27.197090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.954 [2024-07-25 01:28:27.197111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.954 [2024-07-25 01:28:27.197119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.954 [2024-07-25 01:28:27.207636] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.954 [2024-07-25 01:28:27.207654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:1868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.954 [2024-07-25 01:28:27.207662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.954 [2024-07-25 01:28:27.217093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.954 [2024-07-25 01:28:27.217112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:15774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.954 [2024-07-25 01:28:27.217120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.954 [2024-07-25 01:28:27.225750] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.954 [2024-07-25 01:28:27.225768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:6405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.954 [2024-07-25 01:28:27.225775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.954 [2024-07-25 01:28:27.236650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.954 [2024-07-25 01:28:27.236669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.954 [2024-07-25 01:28:27.236680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.954 [2024-07-25 01:28:27.249928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.954 [2024-07-25 01:28:27.249947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.954 [2024-07-25 01:28:27.249955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.954 [2024-07-25 01:28:27.259551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.954 [2024-07-25 01:28:27.259570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:11849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.954 [2024-07-25 
01:28:27.259578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.954 [2024-07-25 01:28:27.269677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.954 [2024-07-25 01:28:27.269696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.954 [2024-07-25 01:28:27.269703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.954 [2024-07-25 01:28:27.282476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.954 [2024-07-25 01:28:27.282495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:16989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.954 [2024-07-25 01:28:27.282502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.954 [2024-07-25 01:28:27.293345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.954 [2024-07-25 01:28:27.293365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:13759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.954 [2024-07-25 01:28:27.293372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.954 [2024-07-25 01:28:27.302929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.954 [2024-07-25 01:28:27.302947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:2534 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.954 [2024-07-25 01:28:27.302955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.954 [2024-07-25 01:28:27.316638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.954 [2024-07-25 01:28:27.316658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:23362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.954 [2024-07-25 01:28:27.316666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.954 [2024-07-25 01:28:27.325248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.954 [2024-07-25 01:28:27.325268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:3430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.954 [2024-07-25 01:28:27.325276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.954 [2024-07-25 01:28:27.335446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.954 [2024-07-25 01:28:27.335468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:20633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.954 [2024-07-25 01:28:27.335476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.955 [2024-07-25 01:28:27.344804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.955 [2024-07-25 01:28:27.344824] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:11673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.955 [2024-07-25 01:28:27.344832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.955 [2024-07-25 01:28:27.354507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.955 [2024-07-25 01:28:27.354526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.955 [2024-07-25 01:28:27.354534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.955 [2024-07-25 01:28:27.364413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.955 [2024-07-25 01:28:27.364432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:24237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.955 [2024-07-25 01:28:27.364440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.955 [2024-07-25 01:28:27.378393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.955 [2024-07-25 01:28:27.378412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:3689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.955 [2024-07-25 01:28:27.378420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.955 [2024-07-25 01:28:27.388167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x115dfb0) 00:28:04.955 [2024-07-25 01:28:27.388187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:1649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.955 [2024-07-25 01:28:27.388195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.955 [2024-07-25 01:28:27.396941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.955 [2024-07-25 01:28:27.396962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.955 [2024-07-25 01:28:27.396972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.955 [2024-07-25 01:28:27.409965] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.955 [2024-07-25 01:28:27.409987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:4478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.955 [2024-07-25 01:28:27.409996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.955 [2024-07-25 01:28:27.419679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.955 [2024-07-25 01:28:27.419699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:11813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.955 [2024-07-25 01:28:27.419708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.955 [2024-07-25 01:28:27.428878] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.955 [2024-07-25 01:28:27.428897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:15216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.955 [2024-07-25 01:28:27.428906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.955 [2024-07-25 01:28:27.438623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:04.955 [2024-07-25 01:28:27.438641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:24126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.955 [2024-07-25 01:28:27.438649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:05.215 [2024-07-25 01:28:27.448256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:05.215 [2024-07-25 01:28:27.448292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:17937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.215 [2024-07-25 01:28:27.448301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:05.215 [2024-07-25 01:28:27.457743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x115dfb0) 00:28:05.215 [2024-07-25 01:28:27.457762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:23347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.215 [2024-07-25 01:28:27.457770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:28:05.215 00:28:05.215 Latency(us) 00:28:05.215 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:05.215 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:05.216 nvme0n1 : 2.00 25641.63 100.16 0.00 0.00 4986.57 2421.98 26898.25 00:28:05.216 =================================================================================================================== 00:28:05.216 Total : 25641.63 100.16 0.00 0.00 4986.57 2421.98 26898.25 00:28:05.216 0 00:28:05.216 01:28:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:05.216 01:28:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:05.216 01:28:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:05.216 | .driver_specific 00:28:05.216 | .nvme_error 00:28:05.216 | .status_code 00:28:05.216 | .command_transient_transport_error' 00:28:05.216 01:28:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:05.216 01:28:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 201 > 0 )) 00:28:05.216 01:28:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1045878 00:28:05.216 01:28:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1045878 ']' 00:28:05.216 01:28:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1045878 00:28:05.216 01:28:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:28:05.216 01:28:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:05.216 01:28:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers 
-o comm= 1045878 00:28:05.476 01:28:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:05.476 01:28:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:05.477 01:28:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1045878' 00:28:05.477 killing process with pid 1045878 00:28:05.477 01:28:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1045878 00:28:05.477 Received shutdown signal, test time was about 2.000000 seconds 00:28:05.477 00:28:05.477 Latency(us) 00:28:05.477 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:05.477 =================================================================================================================== 00:28:05.477 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:05.477 01:28:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1045878 00:28:05.477 01:28:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:28:05.477 01:28:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:05.477 01:28:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:05.477 01:28:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:28:05.477 01:28:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:28:05.477 01:28:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1046566 00:28:05.477 01:28:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1046566 /var/tmp/bperf.sock 00:28:05.477 01:28:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w 
randread -o 131072 -t 2 -q 16 -z 00:28:05.477 01:28:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1046566 ']' 00:28:05.477 01:28:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:05.477 01:28:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:05.477 01:28:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:05.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:05.477 01:28:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:05.477 01:28:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:05.477 [2024-07-25 01:28:27.938665] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:28:05.477 [2024-07-25 01:28:27.938712] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1046566 ] 00:28:05.477 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:05.477 Zero copy mechanism will not be used. 
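The trace above launches bdevperf and then blocks on "Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...". A minimal sketch of such a wait loop (a hypothetical `wait_for_unix_socket` helper for illustration, not SPDK's actual `waitforlisten` implementation) could poll until the socket file exists and accepts a connection:

```python
import os
import socket
import time


def wait_for_unix_socket(path, timeout=10.0, interval=0.1):
    """Poll until a UNIX domain socket at `path` accepts connections.

    Returns True once a listener answers; raises TimeoutError otherwise.
    Hypothetical helper mirroring the wait step in the trace above.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.exists(path):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            try:
                # connect() succeeds only once the server called listen()
                s.connect(path)
                return True
            except OSError:
                pass  # socket file exists but nothing is accepting yet
            finally:
                s.close()
        time.sleep(interval)
    raise TimeoutError(f"no listener on {path} within {timeout}s")
```

Polling for the socket (rather than just the pid) matters here because subsequent `rpc.py -s /var/tmp/bperf.sock` calls fail immediately if bdevperf has forked but not yet bound its RPC listener.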
00:28:05.477 EAL: No free 2048 kB hugepages reported on node 1 00:28:05.737 [2024-07-25 01:28:27.993696] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:05.737 [2024-07-25 01:28:28.073708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:06.306 01:28:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:06.307 01:28:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:28:06.307 01:28:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:06.307 01:28:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:06.566 01:28:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:06.566 01:28:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.566 01:28:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:06.566 01:28:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.566 01:28:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:06.566 01:28:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:06.826 nvme0n1 00:28:07.086 01:28:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o 
crc32c -t corrupt -i 32 00:28:07.086 01:28:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.086 01:28:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:07.086 01:28:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.086 01:28:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:07.086 01:28:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:07.086 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:07.086 Zero copy mechanism will not be used. 00:28:07.086 Running I/O for 2 seconds... 00:28:07.086 [2024-07-25 01:28:29.467810] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30) 00:28:07.086 [2024-07-25 01:28:29.467842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.086 [2024-07-25 01:28:29.467852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:07.086 [2024-07-25 01:28:29.490262] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30) 00:28:07.086 [2024-07-25 01:28:29.490285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.086 [2024-07-25 01:28:29.490293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:07.086 [2024-07-25 01:28:29.504188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x23e2e30) 00:28:07.086 [2024-07-25 01:28:29.504210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.086 [2024-07-25 01:28:29.504218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:07.086 [2024-07-25 01:28:29.517810] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30) 00:28:07.086 [2024-07-25 01:28:29.517831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.086 [2024-07-25 01:28:29.517839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.086 [2024-07-25 01:28:29.531182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30) 00:28:07.086 [2024-07-25 01:28:29.531202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.087 [2024-07-25 01:28:29.531211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:07.087 [2024-07-25 01:28:29.544731] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30) 00:28:07.087 [2024-07-25 01:28:29.544755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.087 [2024-07-25 01:28:29.544763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:07.087 [2024-07-25 01:28:29.558206] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30) 00:28:07.087 [2024-07-25 01:28:29.558225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.087 [2024-07-25 01:28:29.558233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:07.087 [2024-07-25 01:28:29.571858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30) 00:28:07.087 [2024-07-25 01:28:29.571877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.087 [2024-07-25 01:28:29.571885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.347 [2024-07-25 01:28:29.592499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30) 00:28:07.347 [2024-07-25 01:28:29.592520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.347 [2024-07-25 01:28:29.592528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:07.347 [2024-07-25 01:28:29.609998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30) 00:28:07.347 [2024-07-25 01:28:29.610019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.347 [2024-07-25 01:28:29.610026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0
00:28:07.347 [2024-07-25 01:28:29.632009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:07.347 [2024-07-25 01:28:29.632028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.347 [2024-07-25 01:28:29.632035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:07.347 [2024-07-25 01:28:29.652907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:07.347 [2024-07-25 01:28:29.652926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.347 [2024-07-25 01:28:29.652934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:07.347 [2024-07-25 01:28:29.669725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:07.347 [2024-07-25 01:28:29.669745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.347 [2024-07-25 01:28:29.669752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:07.347 [2024-07-25 01:28:29.683240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:07.347 [2024-07-25 01:28:29.683260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.347 [2024-07-25 01:28:29.683267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:07.347 [2024-07-25 01:28:29.703471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:07.347 [2024-07-25 01:28:29.703491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.347 [2024-07-25 01:28:29.703498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:07.347 [2024-07-25 01:28:29.720920] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:07.347 [2024-07-25 01:28:29.720940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.347 [2024-07-25 01:28:29.720947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:07.347 [2024-07-25 01:28:29.734792] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:07.347 [2024-07-25 01:28:29.734811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.347 [2024-07-25 01:28:29.734819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:07.347 [2024-07-25 01:28:29.754805] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:07.347 [2024-07-25 01:28:29.754825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.347 [2024-07-25 01:28:29.754832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:07.347 [2024-07-25 01:28:29.771947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:07.347 [2024-07-25 01:28:29.771966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.347 [2024-07-25 01:28:29.771973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:07.347 [2024-07-25 01:28:29.794246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:07.348 [2024-07-25 01:28:29.794266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.348 [2024-07-25 01:28:29.794274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:07.348 [2024-07-25 01:28:29.808662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:07.348 [2024-07-25 01:28:29.808683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.348 [2024-07-25 01:28:29.808690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:07.348 [2024-07-25 01:28:29.822183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:07.348 [2024-07-25 01:28:29.822203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.348 [2024-07-25 01:28:29.822210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:07.348 [2024-07-25 01:28:29.835645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:07.348 [2024-07-25 01:28:29.835665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.348 [2024-07-25 01:28:29.835675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:07.608 [2024-07-25 01:28:29.849124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:07.608 [2024-07-25 01:28:29.849144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.608 [2024-07-25 01:28:29.849151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:07.608 [2024-07-25 01:28:29.862523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:07.608 [2024-07-25 01:28:29.862542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.608 [2024-07-25 01:28:29.862550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:07.608 [2024-07-25 01:28:29.876020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:07.608 [2024-07-25 01:28:29.876040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.608 [2024-07-25 01:28:29.876055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:07.608 [2024-07-25 01:28:29.889449] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:07.608 [2024-07-25 01:28:29.889468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.608 [2024-07-25 01:28:29.889475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:07.608 [2024-07-25 01:28:29.903058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:07.608 [2024-07-25 01:28:29.903077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.608 [2024-07-25 01:28:29.903084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:07.608 [2024-07-25 01:28:29.916448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:07.608 [2024-07-25 01:28:29.916469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.608 [2024-07-25 01:28:29.916477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:07.608 [2024-07-25 01:28:29.929890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:07.608 [2024-07-25 01:28:29.929909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.608 [2024-07-25 01:28:29.929917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:07.608 [2024-07-25 01:28:29.943453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:07.608 [2024-07-25 01:28:29.943472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.608 [2024-07-25 01:28:29.943480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:07.608 [2024-07-25 01:28:29.957015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:07.608 [2024-07-25 01:28:29.957035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.608 [2024-07-25 01:28:29.957048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:07.608 [2024-07-25 01:28:29.970595] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:07.608 [2024-07-25 01:28:29.970614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.608 [2024-07-25 01:28:29.970622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:07.608 [2024-07-25 01:28:29.984222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:07.608 [2024-07-25 01:28:29.984242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.608 [2024-07-25 01:28:29.984250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:07.608 [2024-07-25 01:28:29.997722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:07.608 [2024-07-25 01:28:29.997742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.608 [2024-07-25 01:28:29.997750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:07.608 [2024-07-25 01:28:30.011297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:07.608 [2024-07-25 01:28:30.011317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.608 [2024-07-25 01:28:30.011324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:07.608 [2024-07-25 01:28:30.025008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:07.608 [2024-07-25 01:28:30.025030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.608 [2024-07-25 01:28:30.025037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:07.608 [2024-07-25 01:28:30.039143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:07.608 [2024-07-25 01:28:30.039166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.608 [2024-07-25 01:28:30.039175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:07.608 [2024-07-25 01:28:30.052750] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:07.608 [2024-07-25 01:28:30.052771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.608 [2024-07-25 01:28:30.052780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:07.608 [2024-07-25 01:28:30.066557] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:07.608 [2024-07-25 01:28:30.066577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.608 [2024-07-25 01:28:30.066590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:07.608 [2024-07-25 01:28:30.080054] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:07.608 [2024-07-25 01:28:30.080074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.608 [2024-07-25 01:28:30.080081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:07.608 [2024-07-25 01:28:30.093571] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:07.608 [2024-07-25 01:28:30.093591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.608 [2024-07-25 01:28:30.093598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:07.868 [2024-07-25 01:28:30.107021] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:07.868 [2024-07-25 01:28:30.107047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.868 [2024-07-25 01:28:30.107055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:07.868 [2024-07-25 01:28:30.120456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:07.868 [2024-07-25 01:28:30.120476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.868 [2024-07-25 01:28:30.120484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:07.868 [2024-07-25 01:28:30.133966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:07.868 [2024-07-25 01:28:30.133985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.868 [2024-07-25 01:28:30.133993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:07.868 [2024-07-25 01:28:30.147409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:07.868 [2024-07-25 01:28:30.147429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.868 [2024-07-25 01:28:30.147437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:07.868 [2024-07-25 01:28:30.160916] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:07.868 [2024-07-25 01:28:30.160935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.868 [2024-07-25 01:28:30.160943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:07.868 [2024-07-25 01:28:30.174547] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:07.868 [2024-07-25 01:28:30.174566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.869 [2024-07-25 01:28:30.174574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:07.869 [2024-07-25 01:28:30.188010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:07.869 [2024-07-25 01:28:30.188033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.869 [2024-07-25 01:28:30.188040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:07.869 [2024-07-25 01:28:30.201570] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:07.869 [2024-07-25 01:28:30.201589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.869 [2024-07-25 01:28:30.201596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:07.869 [2024-07-25 01:28:30.215191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:07.869 [2024-07-25 01:28:30.215210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.869 [2024-07-25 01:28:30.215217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:07.869 [2024-07-25 01:28:30.228612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:07.869 [2024-07-25 01:28:30.228631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.869 [2024-07-25 01:28:30.228638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:07.869 [2024-07-25 01:28:30.242018] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:07.869 [2024-07-25 01:28:30.242037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.869 [2024-07-25 01:28:30.242050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:07.869 [2024-07-25 01:28:30.255493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:07.869 [2024-07-25 01:28:30.255513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.869 [2024-07-25 01:28:30.255520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:07.869 [2024-07-25 01:28:30.268846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:07.869 [2024-07-25 01:28:30.268865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.869 [2024-07-25 01:28:30.268872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:07.869 [2024-07-25 01:28:30.282251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:07.869 [2024-07-25 01:28:30.282271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.869 [2024-07-25 01:28:30.282278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:07.869 [2024-07-25 01:28:30.295945] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:07.869 [2024-07-25 01:28:30.295965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.869 [2024-07-25 01:28:30.295972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:07.869 [2024-07-25 01:28:30.309497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:07.869 [2024-07-25 01:28:30.309516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.869 [2024-07-25 01:28:30.309523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:07.869 [2024-07-25 01:28:30.323017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:07.869 [2024-07-25 01:28:30.323037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.869 [2024-07-25 01:28:30.323049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:07.869 [2024-07-25 01:28:30.336527] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:07.869 [2024-07-25 01:28:30.336547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.869 [2024-07-25 01:28:30.336554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:07.869 [2024-07-25 01:28:30.350035] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:07.869 [2024-07-25 01:28:30.350059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.869 [2024-07-25 01:28:30.350066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:08.130 [2024-07-25 01:28:30.363467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:08.130 [2024-07-25 01:28:30.363489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:08.130 [2024-07-25 01:28:30.363497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:08.130 [2024-07-25 01:28:30.376937] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:08.130 [2024-07-25 01:28:30.376959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:08.130 [2024-07-25 01:28:30.376968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:08.130 [2024-07-25 01:28:30.390370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:08.130 [2024-07-25 01:28:30.390391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:08.130 [2024-07-25 01:28:30.390398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:08.130 [2024-07-25 01:28:30.403804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:08.130 [2024-07-25 01:28:30.403825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:08.130 [2024-07-25 01:28:30.403833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:08.130 [2024-07-25 01:28:30.417296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:08.130 [2024-07-25 01:28:30.417317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:08.131 [2024-07-25 01:28:30.417328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:08.131 [2024-07-25 01:28:30.430754] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:08.131 [2024-07-25 01:28:30.430774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:08.131 [2024-07-25 01:28:30.430782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:08.131 [2024-07-25 01:28:30.444473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:08.131 [2024-07-25 01:28:30.444493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:08.131 [2024-07-25 01:28:30.444500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:08.131 [2024-07-25 01:28:30.458000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:08.131 [2024-07-25 01:28:30.458021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:08.131 [2024-07-25 01:28:30.458029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:08.131 [2024-07-25 01:28:30.471918] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:08.131 [2024-07-25 01:28:30.471939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:08.131 [2024-07-25 01:28:30.471947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:08.131 [2024-07-25 01:28:30.485674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:08.131 [2024-07-25 01:28:30.485694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:08.131 [2024-07-25 01:28:30.485702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:08.131 [2024-07-25 01:28:30.499264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:08.131 [2024-07-25 01:28:30.499283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:08.131 [2024-07-25 01:28:30.499291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:08.131 [2024-07-25 01:28:30.513350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:08.131 [2024-07-25 01:28:30.513370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:08.131 [2024-07-25 01:28:30.513377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:08.131 [2024-07-25 01:28:30.527301] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:08.131 [2024-07-25 01:28:30.527320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:08.131 [2024-07-25 01:28:30.527328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:08.131 [2024-07-25 01:28:30.540962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:08.131 [2024-07-25 01:28:30.540982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:08.131 [2024-07-25 01:28:30.540990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:08.131 [2024-07-25 01:28:30.555123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:08.131 [2024-07-25 01:28:30.555143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:08.131 [2024-07-25 01:28:30.555151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:08.131 [2024-07-25 01:28:30.568941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:08.131 [2024-07-25 01:28:30.568961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:08.131 [2024-07-25 01:28:30.568968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:08.131 [2024-07-25 01:28:30.582909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:08.131 [2024-07-25 01:28:30.582930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:08.131 [2024-07-25 01:28:30.582937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:08.131 [2024-07-25 01:28:30.596914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:08.131 [2024-07-25 01:28:30.596934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:08.131 [2024-07-25 01:28:30.596941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:08.131 [2024-07-25 01:28:30.610474] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:08.131 [2024-07-25 01:28:30.610494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:08.131 [2024-07-25 01:28:30.610502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:08.393 [2024-07-25 01:28:30.624265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:08.393 [2024-07-25 01:28:30.624286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:08.393 [2024-07-25 01:28:30.624294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:08.393 [2024-07-25 01:28:30.637908] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:08.393 [2024-07-25 01:28:30.637928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:08.393 [2024-07-25 01:28:30.637937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:08.393 [2024-07-25 01:28:30.651520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:08.393 [2024-07-25 01:28:30.651541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:08.393 [2024-07-25 01:28:30.651553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:08.393 [2024-07-25 01:28:30.665255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:08.393 [2024-07-25 01:28:30.665276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:08.393 [2024-07-25 01:28:30.665284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:08.393 [2024-07-25 01:28:30.679010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:08.393 [2024-07-25 01:28:30.679030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:08.393 [2024-07-25 01:28:30.679039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:08.393 [2024-07-25 01:28:30.692699] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:08.393 [2024-07-25 01:28:30.692720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:08.393 [2024-07-25 01:28:30.692729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:08.393 [2024-07-25 01:28:30.706390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:08.393 [2024-07-25 01:28:30.706411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:08.393 [2024-07-25 01:28:30.706420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:08.393 [2024-07-25 01:28:30.720067] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:08.393 [2024-07-25 01:28:30.720088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:08.393 [2024-07-25 01:28:30.720096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:08.393 [2024-07-25 01:28:30.733921] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:08.393 [2024-07-25 01:28:30.733940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:08.393 [2024-07-25 01:28:30.733948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:08.393 [2024-07-25 01:28:30.747528] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:08.393 [2024-07-25 01:28:30.747548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:08.393 [2024-07-25 01:28:30.747555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:08.393 [2024-07-25 01:28:30.761226] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30)
00:28:08.393 [2024-07-25 01:28:30.761245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:08.393 [2024-07-25 01:28:30.761253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0
sqhd:0061 p:0 m:0 dnr:0 00:28:08.393 [2024-07-25 01:28:30.775268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30) 00:28:08.393 [2024-07-25 01:28:30.775291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.393 [2024-07-25 01:28:30.775299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.393 [2024-07-25 01:28:30.789169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30) 00:28:08.393 [2024-07-25 01:28:30.789188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.393 [2024-07-25 01:28:30.789195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:08.393 [2024-07-25 01:28:30.802765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30) 00:28:08.393 [2024-07-25 01:28:30.802784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.394 [2024-07-25 01:28:30.802792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:08.394 [2024-07-25 01:28:30.816169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30) 00:28:08.394 [2024-07-25 01:28:30.816189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.394 [2024-07-25 01:28:30.816196] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:08.394 [2024-07-25 01:28:30.829549] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30) 00:28:08.394 [2024-07-25 01:28:30.829568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.394 [2024-07-25 01:28:30.829575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.394 [2024-07-25 01:28:30.843066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30) 00:28:08.394 [2024-07-25 01:28:30.843085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.394 [2024-07-25 01:28:30.843094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:08.394 [2024-07-25 01:28:30.856472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30) 00:28:08.394 [2024-07-25 01:28:30.856492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.394 [2024-07-25 01:28:30.856500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:08.394 [2024-07-25 01:28:30.869854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30) 00:28:08.394 [2024-07-25 01:28:30.869873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.394 [2024-07-25 
01:28:30.869881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:08.394 [2024-07-25 01:28:30.883350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30) 00:28:08.692 [2024-07-25 01:28:30.883371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.692 [2024-07-25 01:28:30.883380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.692 [2024-07-25 01:28:30.897024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30) 00:28:08.692 [2024-07-25 01:28:30.897051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.692 [2024-07-25 01:28:30.897059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:08.692 [2024-07-25 01:28:30.910437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30) 00:28:08.692 [2024-07-25 01:28:30.910456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.692 [2024-07-25 01:28:30.910463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:08.692 [2024-07-25 01:28:30.923917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30) 00:28:08.692 [2024-07-25 01:28:30.923937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.692 [2024-07-25 01:28:30.923944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:08.692 [2024-07-25 01:28:30.937429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30) 00:28:08.692 [2024-07-25 01:28:30.937448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.692 [2024-07-25 01:28:30.937456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.692 [2024-07-25 01:28:30.951560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30) 00:28:08.692 [2024-07-25 01:28:30.951581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.692 [2024-07-25 01:28:30.951588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:08.692 [2024-07-25 01:28:30.965349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30) 00:28:08.692 [2024-07-25 01:28:30.965369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.692 [2024-07-25 01:28:30.965376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:08.692 [2024-07-25 01:28:30.988971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30) 00:28:08.692 [2024-07-25 01:28:30.988990] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.692 [2024-07-25 01:28:30.988997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:08.692 [2024-07-25 01:28:31.004545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30) 00:28:08.692 [2024-07-25 01:28:31.004564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.692 [2024-07-25 01:28:31.004572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.692 [2024-07-25 01:28:31.018415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30) 00:28:08.692 [2024-07-25 01:28:31.018434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.692 [2024-07-25 01:28:31.018445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:08.692 [2024-07-25 01:28:31.031916] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30) 00:28:08.692 [2024-07-25 01:28:31.031935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.692 [2024-07-25 01:28:31.031942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:08.692 [2024-07-25 01:28:31.045429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x23e2e30) 00:28:08.692 [2024-07-25 01:28:31.045449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.692 [2024-07-25 01:28:31.045456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:08.692 [2024-07-25 01:28:31.059791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30) 00:28:08.692 [2024-07-25 01:28:31.059811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.692 [2024-07-25 01:28:31.059818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.692 [2024-07-25 01:28:31.073720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30) 00:28:08.692 [2024-07-25 01:28:31.073740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.692 [2024-07-25 01:28:31.073747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:08.692 [2024-07-25 01:28:31.087489] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30) 00:28:08.692 [2024-07-25 01:28:31.087508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.692 [2024-07-25 01:28:31.087516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:08.692 [2024-07-25 01:28:31.110684] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30) 00:28:08.692 [2024-07-25 01:28:31.110704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.692 [2024-07-25 01:28:31.110711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:08.692 [2024-07-25 01:28:31.126009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30) 00:28:08.692 [2024-07-25 01:28:31.126028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.692 [2024-07-25 01:28:31.126035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.692 [2024-07-25 01:28:31.140418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30) 00:28:08.692 [2024-07-25 01:28:31.140438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.692 [2024-07-25 01:28:31.140446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:08.692 [2024-07-25 01:28:31.161506] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30) 00:28:08.692 [2024-07-25 01:28:31.161526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.692 [2024-07-25 01:28:31.161533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:28:08.692 [2024-07-25 01:28:31.179627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30) 00:28:08.692 [2024-07-25 01:28:31.179646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.692 [2024-07-25 01:28:31.179653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:08.953 [2024-07-25 01:28:31.202885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30) 00:28:08.953 [2024-07-25 01:28:31.202906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.953 [2024-07-25 01:28:31.202914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.953 [2024-07-25 01:28:31.221329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30) 00:28:08.953 [2024-07-25 01:28:31.221348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.953 [2024-07-25 01:28:31.221356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:08.953 [2024-07-25 01:28:31.235245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30) 00:28:08.953 [2024-07-25 01:28:31.235264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.953 [2024-07-25 01:28:31.235271] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:08.953 [2024-07-25 01:28:31.248905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30) 00:28:08.953 [2024-07-25 01:28:31.248924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.953 [2024-07-25 01:28:31.248931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:08.953 [2024-07-25 01:28:31.263025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30) 00:28:08.953 [2024-07-25 01:28:31.263049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.953 [2024-07-25 01:28:31.263057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.953 [2024-07-25 01:28:31.283079] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30) 00:28:08.953 [2024-07-25 01:28:31.283098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.953 [2024-07-25 01:28:31.283105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:08.953 [2024-07-25 01:28:31.299671] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30) 00:28:08.953 [2024-07-25 01:28:31.299691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.953 [2024-07-25 
01:28:31.299705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:08.953 [2024-07-25 01:28:31.313263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30) 00:28:08.953 [2024-07-25 01:28:31.313282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.953 [2024-07-25 01:28:31.313289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:08.953 [2024-07-25 01:28:31.332359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30) 00:28:08.953 [2024-07-25 01:28:31.332378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.953 [2024-07-25 01:28:31.332385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.953 [2024-07-25 01:28:31.349774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30) 00:28:08.953 [2024-07-25 01:28:31.349793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.953 [2024-07-25 01:28:31.349801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:08.953 [2024-07-25 01:28:31.363430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30) 00:28:08.953 [2024-07-25 01:28:31.363449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.953 [2024-07-25 01:28:31.363456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:08.953 [2024-07-25 01:28:31.377236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30) 00:28:08.953 [2024-07-25 01:28:31.377255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.953 [2024-07-25 01:28:31.377263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:08.953 [2024-07-25 01:28:31.391251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30) 00:28:08.953 [2024-07-25 01:28:31.391270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.953 [2024-07-25 01:28:31.391278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.953 [2024-07-25 01:28:31.404786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30) 00:28:08.953 [2024-07-25 01:28:31.404805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.953 [2024-07-25 01:28:31.404812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:08.953 [2024-07-25 01:28:31.424625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30) 00:28:08.953 [2024-07-25 01:28:31.424645] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.953 [2024-07-25 01:28:31.424652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:08.953 [2024-07-25 01:28:31.442507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e2e30) 00:28:08.953 [2024-07-25 01:28:31.442530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.953 [2024-07-25 01:28:31.442537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:09.213 00:28:09.213 Latency(us) 00:28:09.213 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:09.213 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:09.213 nvme0n1 : 2.01 2083.74 260.47 0.00 0.00 7672.17 6525.11 29405.72 00:28:09.213 =================================================================================================================== 00:28:09.213 Total : 2083.74 260.47 0.00 0.00 7672.17 6525.11 29405.72 00:28:09.213 0 00:28:09.213 01:28:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:09.213 01:28:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:09.213 01:28:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:09.213 | .driver_specific 00:28:09.213 | .nvme_error 00:28:09.213 | .status_code 00:28:09.213 | .command_transient_transport_error' 00:28:09.213 01:28:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_get_iostat -b nvme0n1 00:28:09.213 01:28:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 135 > 0 )) 00:28:09.213 01:28:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1046566 00:28:09.213 01:28:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1046566 ']' 00:28:09.213 01:28:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1046566 00:28:09.213 01:28:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:28:09.213 01:28:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:09.213 01:28:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1046566 00:28:09.213 01:28:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:09.213 01:28:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:09.213 01:28:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1046566' 00:28:09.213 killing process with pid 1046566 00:28:09.214 01:28:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1046566 00:28:09.214 Received shutdown signal, test time was about 2.000000 seconds 00:28:09.214 00:28:09.214 Latency(us) 00:28:09.214 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:09.214 =================================================================================================================== 00:28:09.214 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:09.214 01:28:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1046566 00:28:09.474 01:28:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:28:09.474 
01:28:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:09.474 01:28:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:28:09.474 01:28:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:09.474 01:28:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:09.474 01:28:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1047102 00:28:09.474 01:28:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1047102 /var/tmp/bperf.sock 00:28:09.474 01:28:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:28:09.474 01:28:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1047102 ']' 00:28:09.474 01:28:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:09.474 01:28:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:09.474 01:28:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:09.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:09.474 01:28:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:09.474 01:28:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:09.474 [2024-07-25 01:28:31.931240] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:28:09.474 [2024-07-25 01:28:31.931302] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1047102 ] 00:28:09.474 EAL: No free 2048 kB hugepages reported on node 1 00:28:09.733 [2024-07-25 01:28:31.988584] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:09.734 [2024-07-25 01:28:32.060243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:10.302 01:28:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:10.302 01:28:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:28:10.302 01:28:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:10.302 01:28:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:10.563 01:28:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:10.563 01:28:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.563 01:28:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:10.563 01:28:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.563 01:28:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:10.563 01:28:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:10.822 nvme0n1 00:28:10.822 01:28:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:10.822 01:28:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.822 01:28:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:10.822 01:28:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.822 01:28:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:10.822 01:28:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:11.092 Running I/O for 2 seconds... 
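Each injected-corruption cycle in the bperf run below produces a pair of log lines: a `tcp.c` "Data digest error on tqpair=(...) with pdu=0x..." error, followed by an `nvme_qpair.c` completion reporting a TRANSIENT TRANSPORT ERROR. As a minimal sketch (not part of the SPDK tree; the helper name and sample strings are ours), the digest errors can be tallied per pdu value to sanity-check that every corrupted write surfaced as expected:

```python
import re
from collections import Counter

# Matches the tcp.c digest-error lines emitted during the error-injection run,
# capturing the qpair pointer and the pdu value.
PDU_RE = re.compile(
    r"Data digest error on tqpair=\((0x[0-9a-f]+)\) with pdu=(0x[0-9a-f]+)"
)

def count_digest_errors(log_text):
    """Return a Counter mapping pdu value -> number of digest errors seen."""
    return Counter(m.group(2) for m in PDU_RE.finditer(log_text))

# Two sample lines in the same format as the log output below.
sample = (
    "[2024-07-25 01:28:33.401135] tcp.c:2113:data_crc32_calc_done: *ERROR*: "
    "Data digest error on tqpair=(0x17a0270) with pdu=0x2000190ff3c8\n"
    "[2024-07-25 01:28:33.412216] tcp.c:2113:data_crc32_calc_done: *ERROR*: "
    "Data digest error on tqpair=(0x17a0270) with pdu=0x2000190e6738\n"
)
print(count_digest_errors(sample))
```

This is only a log-scraping aid for reading the output that follows; the test itself validates the errors through the RPC layer.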
00:28:11.092 [2024-07-25 01:28:33.401135] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190ff3c8 00:28:11.092 [2024-07-25 01:28:33.401854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.092 [2024-07-25 01:28:33.401888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:11.092 [2024-07-25 01:28:33.412216] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190e6738 00:28:11.092 [2024-07-25 01:28:33.412965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.092 [2024-07-25 01:28:33.412988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:11.092 [2024-07-25 01:28:33.421474] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190ebb98 00:28:11.092 [2024-07-25 01:28:33.422207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.092 [2024-07-25 01:28:33.422227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:11.092 [2024-07-25 01:28:33.430695] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190e6738 00:28:11.092 [2024-07-25 01:28:33.431491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:6205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.092 [2024-07-25 01:28:33.431510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:121 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:11.092 [2024-07-25 01:28:33.439857] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190ebb98 00:28:11.092 [2024-07-25 01:28:33.440627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:24648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.092 [2024-07-25 01:28:33.440647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:11.092 [2024-07-25 01:28:33.449078] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190e6738 00:28:11.092 [2024-07-25 01:28:33.449827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:19414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.092 [2024-07-25 01:28:33.449845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:11.092 [2024-07-25 01:28:33.458257] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190ebb98 00:28:11.092 [2024-07-25 01:28:33.459012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:10857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.092 [2024-07-25 01:28:33.459031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:11.092 [2024-07-25 01:28:33.467384] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190e6738 00:28:11.092 [2024-07-25 01:28:33.468164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.092 [2024-07-25 01:28:33.468182] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:11.092 [2024-07-25 01:28:33.476622] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190ebb98 00:28:11.092 [2024-07-25 01:28:33.477501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:3937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.092 [2024-07-25 01:28:33.477520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:11.092 [2024-07-25 01:28:33.485772] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190e6738 00:28:11.092 [2024-07-25 01:28:33.486572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.092 [2024-07-25 01:28:33.486591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:11.092 [2024-07-25 01:28:33.494929] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190ebb98 00:28:11.092 [2024-07-25 01:28:33.495706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.092 [2024-07-25 01:28:33.495725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:11.092 [2024-07-25 01:28:33.504151] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190e6738 00:28:11.092 [2024-07-25 01:28:33.504837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.092 [2024-07-25 01:28:33.504855] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:11.092 [2024-07-25 01:28:33.513290] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190ebb98 00:28:11.092 [2024-07-25 01:28:33.514003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.092 [2024-07-25 01:28:33.514022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:11.092 [2024-07-25 01:28:33.522459] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190e6738 00:28:11.092 [2024-07-25 01:28:33.523188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.092 [2024-07-25 01:28:33.523206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:11.092 [2024-07-25 01:28:33.531633] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190ebb98 00:28:11.092 [2024-07-25 01:28:33.532362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:15466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.092 [2024-07-25 01:28:33.532380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:11.092 [2024-07-25 01:28:33.540782] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190e6738 00:28:11.092 [2024-07-25 01:28:33.541505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:5156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:11.092 [2024-07-25 01:28:33.541523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:11.092 [2024-07-25 01:28:33.549986] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190ebb98 00:28:11.092 [2024-07-25 01:28:33.550744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:10834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.092 [2024-07-25 01:28:33.550761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:11.092 [2024-07-25 01:28:33.559123] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190e6738 00:28:11.092 [2024-07-25 01:28:33.559908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:2421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.092 [2024-07-25 01:28:33.559926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:11.092 [2024-07-25 01:28:33.568250] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190ebb98 00:28:11.092 [2024-07-25 01:28:33.569031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.092 [2024-07-25 01:28:33.569056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:11.092 [2024-07-25 01:28:33.577585] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190e6738 00:28:11.092 [2024-07-25 01:28:33.578385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 
lba:24298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.092 [2024-07-25 01:28:33.578403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:11.354 [2024-07-25 01:28:33.586875] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190ebb98 00:28:11.354 [2024-07-25 01:28:33.587711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.354 [2024-07-25 01:28:33.587729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:11.354 [2024-07-25 01:28:33.596135] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190e6738 00:28:11.354 [2024-07-25 01:28:33.596889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.354 [2024-07-25 01:28:33.596907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:11.354 [2024-07-25 01:28:33.605294] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190ebb98 00:28:11.354 [2024-07-25 01:28:33.606046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.354 [2024-07-25 01:28:33.606065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:11.354 [2024-07-25 01:28:33.614411] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190e6738 00:28:11.354 [2024-07-25 01:28:33.615188] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.354 [2024-07-25 01:28:33.615206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:11.354 [2024-07-25 01:28:33.624663] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f7970 00:28:11.354 [2024-07-25 01:28:33.626844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.354 [2024-07-25 01:28:33.626861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:11.354 [2024-07-25 01:28:33.640830] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f7100 00:28:11.354 [2024-07-25 01:28:33.641601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.354 [2024-07-25 01:28:33.641621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:11.354 [2024-07-25 01:28:33.650328] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f7100 00:28:11.354 [2024-07-25 01:28:33.650993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:7202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.354 [2024-07-25 01:28:33.651014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:11.354 [2024-07-25 01:28:33.659947] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f7100 00:28:11.354 
[2024-07-25 01:28:33.660175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:1507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.354 [2024-07-25 01:28:33.660192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:11.354 [2024-07-25 01:28:33.669601] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f7100 00:28:11.354 [2024-07-25 01:28:33.671584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:3059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.354 [2024-07-25 01:28:33.671602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:11.354 [2024-07-25 01:28:33.684956] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190feb58 00:28:11.354 [2024-07-25 01:28:33.686647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.354 [2024-07-25 01:28:33.686666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:11.354 [2024-07-25 01:28:33.695951] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190fc560 00:28:11.354 [2024-07-25 01:28:33.696705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:2937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.354 [2024-07-25 01:28:33.696722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.354 [2024-07-25 01:28:33.705484] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x17a0270) with pdu=0x2000190fc560 00:28:11.354 [2024-07-25 01:28:33.705714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:2480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.354 [2024-07-25 01:28:33.705732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.354 [2024-07-25 01:28:33.716238] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f81e0 00:28:11.354 [2024-07-25 01:28:33.718448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:21487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.354 [2024-07-25 01:28:33.718466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:11.354 [2024-07-25 01:28:33.728692] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f7da8 00:28:11.354 [2024-07-25 01:28:33.729668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:11134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.354 [2024-07-25 01:28:33.729686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:11.354 [2024-07-25 01:28:33.738863] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190e6738 00:28:11.354 [2024-07-25 01:28:33.740365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:9316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.354 [2024-07-25 01:28:33.740384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:11.354 [2024-07-25 01:28:33.746563] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f1430 00:28:11.354 [2024-07-25 01:28:33.748417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.354 [2024-07-25 01:28:33.748435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:11.354 [2024-07-25 01:28:33.757448] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f2d80 00:28:11.354 [2024-07-25 01:28:33.758354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:24032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.354 [2024-07-25 01:28:33.758372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:11.354 [2024-07-25 01:28:33.766490] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:11.354 [2024-07-25 01:28:33.767424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:18468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.354 [2024-07-25 01:28:33.767443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:11.354 [2024-07-25 01:28:33.775657] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f0ff8 00:28:11.354 [2024-07-25 01:28:33.776528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.354 [2024-07-25 01:28:33.776546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 
00:28:11.354 [2024-07-25 01:28:33.784759] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190eee38 00:28:11.354 [2024-07-25 01:28:33.785817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:4155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.354 [2024-07-25 01:28:33.785835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:11.354 [2024-07-25 01:28:33.793967] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f46d0 00:28:11.354 [2024-07-25 01:28:33.794891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:16437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.354 [2024-07-25 01:28:33.794909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:11.354 [2024-07-25 01:28:33.803091] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f6cc8 00:28:11.354 [2024-07-25 01:28:33.803981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:11268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.354 [2024-07-25 01:28:33.803999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:11.354 [2024-07-25 01:28:33.812196] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190ee190 00:28:11.354 [2024-07-25 01:28:33.813096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:10446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.354 [2024-07-25 01:28:33.813114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:28 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:11.354 [2024-07-25 01:28:33.821349] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f6890 00:28:11.354 [2024-07-25 01:28:33.822257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:4915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.355 [2024-07-25 01:28:33.822276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:11.355 [2024-07-25 01:28:33.830477] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f2d80 00:28:11.355 [2024-07-25 01:28:33.831380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:24401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.355 [2024-07-25 01:28:33.831400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:11.355 [2024-07-25 01:28:33.839562] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:11.355 [2024-07-25 01:28:33.840499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:18925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.355 [2024-07-25 01:28:33.840518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:11.615 [2024-07-25 01:28:33.849029] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f0ff8 00:28:11.615 [2024-07-25 01:28:33.849952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:10897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.615 [2024-07-25 01:28:33.849971] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:11.615 [2024-07-25 01:28:33.858226] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190eee38 00:28:11.615 [2024-07-25 01:28:33.859295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.615 [2024-07-25 01:28:33.859313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:11.615 [2024-07-25 01:28:33.867342] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f46d0 00:28:11.615 [2024-07-25 01:28:33.868236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:16260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.615 [2024-07-25 01:28:33.868256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:11.615 [2024-07-25 01:28:33.876657] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190ed0b0 00:28:11.615 [2024-07-25 01:28:33.878904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:14364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.615 [2024-07-25 01:28:33.878922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:11.615 [2024-07-25 01:28:33.890701] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f7970 00:28:11.615 [2024-07-25 01:28:33.891996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:15981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.615 [2024-07-25 01:28:33.892016] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:11.615 [2024-07-25 01:28:33.899939] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190ec840 00:28:11.615 [2024-07-25 01:28:33.900827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:11107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.615 [2024-07-25 01:28:33.900845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:11.615 [2024-07-25 01:28:33.909145] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f7100 00:28:11.615 [2024-07-25 01:28:33.910639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:7952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.615 [2024-07-25 01:28:33.910660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:11.615 [2024-07-25 01:28:33.918412] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190ebfd0 00:28:11.615 [2024-07-25 01:28:33.919255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:19006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.615 [2024-07-25 01:28:33.919274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:11.616 [2024-07-25 01:28:33.927712] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190fcdd0 00:28:11.616 [2024-07-25 01:28:33.928631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:10207 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:28:11.616 [2024-07-25 01:28:33.928650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:11.616 [2024-07-25 01:28:33.936812] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190fc998 00:28:11.616 [2024-07-25 01:28:33.937743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:19830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.616 [2024-07-25 01:28:33.937762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:11.616 [2024-07-25 01:28:33.946326] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f1430 00:28:11.616 [2024-07-25 01:28:33.948460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:12328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.616 [2024-07-25 01:28:33.948479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.616 [2024-07-25 01:28:33.960942] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8618 00:28:11.616 [2024-07-25 01:28:33.962574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:7023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.616 [2024-07-25 01:28:33.962592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.616 [2024-07-25 01:28:33.971428] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8618 00:28:11.616 [2024-07-25 01:28:33.971673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 
nsid:1 lba:16717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.616 [2024-07-25 01:28:33.971691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:11.616 [2024-07-25 01:28:33.980945] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8618 00:28:11.616 [2024-07-25 01:28:33.981369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:8915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.616 [2024-07-25 01:28:33.981387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:11.616 [2024-07-25 01:28:33.990483] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8618 00:28:11.616 [2024-07-25 01:28:33.990706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.616 [2024-07-25 01:28:33.990723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:11.616 [2024-07-25 01:28:33.999959] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8618 00:28:11.616 [2024-07-25 01:28:34.000609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:11328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.616 [2024-07-25 01:28:34.000627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:11.616 [2024-07-25 01:28:34.009491] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8618 00:28:11.616 [2024-07-25 01:28:34.009716] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:4627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.616 [2024-07-25 01:28:34.009734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:11.616 [2024-07-25 01:28:34.019031] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8618 00:28:11.616 [2024-07-25 01:28:34.019448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.616 [2024-07-25 01:28:34.019467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:11.616 [2024-07-25 01:28:34.029260] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f31b8 00:28:11.616 [2024-07-25 01:28:34.031385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:7021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.616 [2024-07-25 01:28:34.031404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.616 [2024-07-25 01:28:34.042997] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190fda78 00:28:11.616 [2024-07-25 01:28:34.043948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:14508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.616 [2024-07-25 01:28:34.043966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.616 [2024-07-25 01:28:34.052679] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:11.616 
[2024-07-25 01:28:34.052921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.616 [2024-07-25 01:28:34.052938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:11.616 [2024-07-25 01:28:34.062190] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:11.616 [2024-07-25 01:28:34.062415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:13294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.616 [2024-07-25 01:28:34.062433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:11.616 [2024-07-25 01:28:34.071711] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:11.616 [2024-07-25 01:28:34.071934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:13230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.616 [2024-07-25 01:28:34.071953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:11.616 [2024-07-25 01:28:34.081208] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:11.616 [2024-07-25 01:28:34.081439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.616 [2024-07-25 01:28:34.081457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:11.616 [2024-07-25 01:28:34.090735] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:11.616 [2024-07-25 01:28:34.090969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:18362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.616 [2024-07-25 01:28:34.090987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:11.616 [2024-07-25 01:28:34.100249] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:11.616 [2024-07-25 01:28:34.100481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:9 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.616 [2024-07-25 01:28:34.100499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:11.877 [2024-07-25 01:28:34.110008] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:11.877 [2024-07-25 01:28:34.110267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.877 [2024-07-25 01:28:34.110286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:11.877 [2024-07-25 01:28:34.119635] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:11.877 [2024-07-25 01:28:34.119871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:3455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.877 [2024-07-25 01:28:34.119890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:11.877 [2024-07-25 01:28:34.129145] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:11.877 [2024-07-25 01:28:34.129378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:15869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.877 [2024-07-25 01:28:34.129396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:11.877 [2024-07-25 01:28:34.138644] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:11.877 [2024-07-25 01:28:34.138876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.877 [2024-07-25 01:28:34.138895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:11.877 [2024-07-25 01:28:34.148237] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:11.877 [2024-07-25 01:28:34.148469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:8485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.877 [2024-07-25 01:28:34.148487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:11.877 [2024-07-25 01:28:34.157703] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:11.877 [2024-07-25 01:28:34.157936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:11535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.877 [2024-07-25 01:28:34.157954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
00:28:11.877 [2024-07-25 01:28:34.167235] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:11.877 [2024-07-25 01:28:34.167473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.877 [2024-07-25 01:28:34.167495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:11.877 [2024-07-25 01:28:34.176923] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:11.877 [2024-07-25 01:28:34.177164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:22914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.877 [2024-07-25 01:28:34.177183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:11.877 [2024-07-25 01:28:34.186402] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:11.877 [2024-07-25 01:28:34.186633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:11790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.877 [2024-07-25 01:28:34.186651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:11.877 [2024-07-25 01:28:34.195925] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:11.877 [2024-07-25 01:28:34.196162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.877 [2024-07-25 01:28:34.196180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:11.877 [2024-07-25 01:28:34.205419] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:11.877 [2024-07-25 01:28:34.205650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:6886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.877 [2024-07-25 01:28:34.205667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:11.877 [2024-07-25 01:28:34.214929] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:11.877 [2024-07-25 01:28:34.215163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:4059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.877 [2024-07-25 01:28:34.215182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:11.877 [2024-07-25 01:28:34.224456] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:11.877 [2024-07-25 01:28:34.224688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.877 [2024-07-25 01:28:34.224705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:11.877 [2024-07-25 01:28:34.233919] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:11.877 [2024-07-25 01:28:34.234152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:5426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.877 [2024-07-25 01:28:34.234171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:11.877 [2024-07-25 01:28:34.243448] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:11.877 [2024-07-25 01:28:34.243682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:6797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.877 [2024-07-25 01:28:34.243700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:11.877 [2024-07-25 01:28:34.252962] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:11.877 [2024-07-25 01:28:34.253206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.877 [2024-07-25 01:28:34.253224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:11.877 [2024-07-25 01:28:34.262427] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:11.877 [2024-07-25 01:28:34.262660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:6912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.877 [2024-07-25 01:28:34.262678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:11.877 [2024-07-25 01:28:34.271971] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:11.877 [2024-07-25 01:28:34.272208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:25489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.878 [2024-07-25 01:28:34.272226] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:11.878 [2024-07-25 01:28:34.281465] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:11.878 [2024-07-25 01:28:34.281696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.878 [2024-07-25 01:28:34.281716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:11.878 [2024-07-25 01:28:34.290974] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:11.878 [2024-07-25 01:28:34.291242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:7810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.878 [2024-07-25 01:28:34.291262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:11.878 [2024-07-25 01:28:34.300502] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:11.878 [2024-07-25 01:28:34.300758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:3164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.878 [2024-07-25 01:28:34.300777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:11.878 [2024-07-25 01:28:34.310065] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:11.878 [2024-07-25 01:28:34.310300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.878 
[2024-07-25 01:28:34.310317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:11.878 [2024-07-25 01:28:34.319587] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:11.878 [2024-07-25 01:28:34.319819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:9171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.878 [2024-07-25 01:28:34.319837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:11.878 [2024-07-25 01:28:34.329098] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:11.878 [2024-07-25 01:28:34.329331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:13420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.878 [2024-07-25 01:28:34.329348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:11.878 [2024-07-25 01:28:34.338599] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:11.878 [2024-07-25 01:28:34.338831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.878 [2024-07-25 01:28:34.338850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:11.878 [2024-07-25 01:28:34.348207] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:11.878 [2024-07-25 01:28:34.348442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:3882 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:28:11.878 [2024-07-25 01:28:34.348460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:11.878 [2024-07-25 01:28:34.357751] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:11.878 [2024-07-25 01:28:34.357985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:24085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.878 [2024-07-25 01:28:34.358002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:11.878 [2024-07-25 01:28:34.367436] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:11.878 [2024-07-25 01:28:34.367694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.878 [2024-07-25 01:28:34.367712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.139 [2024-07-25 01:28:34.377187] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.139 [2024-07-25 01:28:34.377421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:24756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.139 [2024-07-25 01:28:34.377440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.139 [2024-07-25 01:28:34.386658] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.139 [2024-07-25 01:28:34.386897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:83 nsid:1 lba:10447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.139 [2024-07-25 01:28:34.386915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.139 [2024-07-25 01:28:34.396168] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.139 [2024-07-25 01:28:34.396402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.139 [2024-07-25 01:28:34.396420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.139 [2024-07-25 01:28:34.405673] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.139 [2024-07-25 01:28:34.405910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:21174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.139 [2024-07-25 01:28:34.405927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.139 [2024-07-25 01:28:34.415240] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.139 [2024-07-25 01:28:34.415472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:15011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.139 [2024-07-25 01:28:34.415493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.139 [2024-07-25 01:28:34.424829] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.139 [2024-07-25 01:28:34.425086] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.139 [2024-07-25 01:28:34.425106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.139 [2024-07-25 01:28:34.434714] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.139 [2024-07-25 01:28:34.434969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:6515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.139 [2024-07-25 01:28:34.434986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.139 [2024-07-25 01:28:34.444302] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.139 [2024-07-25 01:28:34.444558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:5458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.139 [2024-07-25 01:28:34.444576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.139 [2024-07-25 01:28:34.453885] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.139 [2024-07-25 01:28:34.454122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.139 [2024-07-25 01:28:34.454140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.139 [2024-07-25 01:28:34.463435] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.139 
[2024-07-25 01:28:34.463666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:6047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.139 [2024-07-25 01:28:34.463684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.139 [2024-07-25 01:28:34.472960] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.139 [2024-07-25 01:28:34.473200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:10963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.139 [2024-07-25 01:28:34.473219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.139 [2024-07-25 01:28:34.482520] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.139 [2024-07-25 01:28:34.482753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.139 [2024-07-25 01:28:34.482770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.139 [2024-07-25 01:28:34.491989] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.139 [2024-07-25 01:28:34.492228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:23789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.139 [2024-07-25 01:28:34.492246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.139 [2024-07-25 01:28:34.501576] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.139 [2024-07-25 01:28:34.501812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:4053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.139 [2024-07-25 01:28:34.501833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.139 [2024-07-25 01:28:34.511100] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.139 [2024-07-25 01:28:34.511336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.139 [2024-07-25 01:28:34.511353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.139 [2024-07-25 01:28:34.520610] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.139 [2024-07-25 01:28:34.520839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:20306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.139 [2024-07-25 01:28:34.520857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.139 [2024-07-25 01:28:34.530148] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.139 [2024-07-25 01:28:34.530398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:20956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.139 [2024-07-25 01:28:34.530415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.139 [2024-07-25 01:28:34.539686] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.139 [2024-07-25 01:28:34.539919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.139 [2024-07-25 01:28:34.539937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.139 [2024-07-25 01:28:34.549335] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.139 [2024-07-25 01:28:34.549571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:13988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.139 [2024-07-25 01:28:34.549589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.139 [2024-07-25 01:28:34.558900] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.139 [2024-07-25 01:28:34.559135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:8593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.139 [2024-07-25 01:28:34.559153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.139 [2024-07-25 01:28:34.568411] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.139 [2024-07-25 01:28:34.568643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.139 [2024-07-25 01:28:34.568661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
00:28:12.139 [2024-07-25 01:28:34.577954] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.139 [2024-07-25 01:28:34.578193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:9537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.139 [2024-07-25 01:28:34.578212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.139 [2024-07-25 01:28:34.587461] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.139 [2024-07-25 01:28:34.587690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:24824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.139 [2024-07-25 01:28:34.587707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.139 [2024-07-25 01:28:34.596920] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.139 [2024-07-25 01:28:34.597169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.139 [2024-07-25 01:28:34.597187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.139 [2024-07-25 01:28:34.606477] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.139 [2024-07-25 01:28:34.606710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:23363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.139 [2024-07-25 01:28:34.606728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.139 [2024-07-25 01:28:34.615952] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.139 [2024-07-25 01:28:34.616215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:18465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.139 [2024-07-25 01:28:34.616244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.140 [2024-07-25 01:28:34.625586] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.140 [2024-07-25 01:28:34.625824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.140 [2024-07-25 01:28:34.625842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.400 [2024-07-25 01:28:34.635393] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.400 [2024-07-25 01:28:34.635632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:4344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.400 [2024-07-25 01:28:34.635651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.400 [2024-07-25 01:28:34.645186] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.400 [2024-07-25 01:28:34.645445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:23023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.400 [2024-07-25 01:28:34.645463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.400 [2024-07-25 01:28:34.654795] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.400 [2024-07-25 01:28:34.655032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.400 [2024-07-25 01:28:34.655059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.400 [2024-07-25 01:28:34.664363] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.400 [2024-07-25 01:28:34.664597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:5175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.400 [2024-07-25 01:28:34.664615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.400 [2024-07-25 01:28:34.673838] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.400 [2024-07-25 01:28:34.674073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:5533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.400 [2024-07-25 01:28:34.674091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.400 [2024-07-25 01:28:34.683691] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.400 [2024-07-25 01:28:34.683928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.400 [2024-07-25 01:28:34.683947] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.400 [2024-07-25 01:28:34.693337] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.400 [2024-07-25 01:28:34.693573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:13099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.400 [2024-07-25 01:28:34.693590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.400 [2024-07-25 01:28:34.703057] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.400 [2024-07-25 01:28:34.703293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:15526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.400 [2024-07-25 01:28:34.703312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.400 [2024-07-25 01:28:34.712759] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.400 [2024-07-25 01:28:34.712995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.400 [2024-07-25 01:28:34.713013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.400 [2024-07-25 01:28:34.722301] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.400 [2024-07-25 01:28:34.722547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:15933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.400 
[2024-07-25 01:28:34.722566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.400 [2024-07-25 01:28:34.731881] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.400 [2024-07-25 01:28:34.732121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:10143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.400 [2024-07-25 01:28:34.732140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.400 [2024-07-25 01:28:34.741581] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.400 [2024-07-25 01:28:34.741821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.401 [2024-07-25 01:28:34.741839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.401 [2024-07-25 01:28:34.751144] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.401 [2024-07-25 01:28:34.751381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:10688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.401 [2024-07-25 01:28:34.751404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.401 [2024-07-25 01:28:34.760704] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.401 [2024-07-25 01:28:34.760938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:8115 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:28:12.401 [2024-07-25 01:28:34.760955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.401 [2024-07-25 01:28:34.770276] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.401 [2024-07-25 01:28:34.770505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.401 [2024-07-25 01:28:34.770523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.401 [2024-07-25 01:28:34.779810] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.401 [2024-07-25 01:28:34.780051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:14062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.401 [2024-07-25 01:28:34.780069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.401 [2024-07-25 01:28:34.789376] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.401 [2024-07-25 01:28:34.789611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:7515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.401 [2024-07-25 01:28:34.789628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.401 [2024-07-25 01:28:34.798909] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.401 [2024-07-25 01:28:34.799162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:2 nsid:1 lba:15141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.401 [2024-07-25 01:28:34.799181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.401 [2024-07-25 01:28:34.808502] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.401 [2024-07-25 01:28:34.808734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:9961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.401 [2024-07-25 01:28:34.808752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.401 [2024-07-25 01:28:34.818056] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.401 [2024-07-25 01:28:34.818310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:25579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.401 [2024-07-25 01:28:34.818328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.401 [2024-07-25 01:28:34.827604] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.401 [2024-07-25 01:28:34.827901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.401 [2024-07-25 01:28:34.827919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.401 [2024-07-25 01:28:34.837166] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.401 [2024-07-25 01:28:34.837401] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:12628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.401 [2024-07-25 01:28:34.837419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.401 [2024-07-25 01:28:34.846730] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.401 [2024-07-25 01:28:34.846983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:18603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.401 [2024-07-25 01:28:34.847000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.401 [2024-07-25 01:28:34.856293] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.401 [2024-07-25 01:28:34.856530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.401 [2024-07-25 01:28:34.856548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.401 [2024-07-25 01:28:34.865800] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.401 [2024-07-25 01:28:34.866035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:21292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.401 [2024-07-25 01:28:34.866058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.401 [2024-07-25 01:28:34.875360] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.401 
[2024-07-25 01:28:34.875616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:18163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.401 [2024-07-25 01:28:34.875634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.401 [2024-07-25 01:28:34.884917] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.401 [2024-07-25 01:28:34.885143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.401 [2024-07-25 01:28:34.885162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.662 [2024-07-25 01:28:34.894750] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.662 [2024-07-25 01:28:34.894986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:11641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.662 [2024-07-25 01:28:34.895005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.662 [2024-07-25 01:28:34.904509] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.662 [2024-07-25 01:28:34.904762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:1567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.662 [2024-07-25 01:28:34.904781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.662 [2024-07-25 01:28:34.914336] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.662 [2024-07-25 01:28:34.914576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.662 [2024-07-25 01:28:34.914593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.662 [2024-07-25 01:28:34.924111] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.662 [2024-07-25 01:28:34.924351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:24185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.662 [2024-07-25 01:28:34.924369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.662 [2024-07-25 01:28:34.933849] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.662 [2024-07-25 01:28:34.934105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:8394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.662 [2024-07-25 01:28:34.934123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.662 [2024-07-25 01:28:34.943660] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.662 [2024-07-25 01:28:34.943893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.662 [2024-07-25 01:28:34.943912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.662 [2024-07-25 01:28:34.953223] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.662 [2024-07-25 01:28:34.953457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:20210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.662 [2024-07-25 01:28:34.953475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.662 [2024-07-25 01:28:34.962823] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.662 [2024-07-25 01:28:34.963076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:8209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.662 [2024-07-25 01:28:34.963094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.662 [2024-07-25 01:28:34.972486] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.662 [2024-07-25 01:28:34.972722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.662 [2024-07-25 01:28:34.972740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.662 [2024-07-25 01:28:34.981955] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.662 [2024-07-25 01:28:34.982212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:7218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.662 [2024-07-25 01:28:34.982231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
00:28:12.662 [2024-07-25 01:28:34.991566] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.662 [2024-07-25 01:28:34.991800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:1125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.662 [2024-07-25 01:28:34.991818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.662 [2024-07-25 01:28:35.001089] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.662 [2024-07-25 01:28:35.001322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.662 [2024-07-25 01:28:35.001343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.662 [2024-07-25 01:28:35.010726] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.662 [2024-07-25 01:28:35.010960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:21110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.662 [2024-07-25 01:28:35.010978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.662 [2024-07-25 01:28:35.020397] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.662 [2024-07-25 01:28:35.020630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:5294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.662 [2024-07-25 01:28:35.020648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.662 [2024-07-25 01:28:35.029899] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.662 [2024-07-25 01:28:35.030135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.662 [2024-07-25 01:28:35.030153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.662 [2024-07-25 01:28:35.039518] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.662 [2024-07-25 01:28:35.039750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:14918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.662 [2024-07-25 01:28:35.039767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.662 [2024-07-25 01:28:35.049115] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.662 [2024-07-25 01:28:35.049340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:3891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.662 [2024-07-25 01:28:35.049358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.662 [2024-07-25 01:28:35.058630] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.663 [2024-07-25 01:28:35.058878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.663 [2024-07-25 01:28:35.058896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.663 [2024-07-25 01:28:35.068209] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.663 [2024-07-25 01:28:35.068442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:12909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.663 [2024-07-25 01:28:35.068459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.663 [2024-07-25 01:28:35.077713] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.663 [2024-07-25 01:28:35.077946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:12046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.663 [2024-07-25 01:28:35.077964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.663 [2024-07-25 01:28:35.087261] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.663 [2024-07-25 01:28:35.087500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.663 [2024-07-25 01:28:35.087518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.663 [2024-07-25 01:28:35.096801] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.663 [2024-07-25 01:28:35.097035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:22326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.663 [2024-07-25 01:28:35.097056] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.663 [2024-07-25 01:28:35.106335] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.663 [2024-07-25 01:28:35.106588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.663 [2024-07-25 01:28:35.106606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.663 [2024-07-25 01:28:35.115929] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.663 [2024-07-25 01:28:35.116169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.663 [2024-07-25 01:28:35.116188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.663 [2024-07-25 01:28:35.125432] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.663 [2024-07-25 01:28:35.125664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:7801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.663 [2024-07-25 01:28:35.125681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.663 [2024-07-25 01:28:35.134974] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.663 [2024-07-25 01:28:35.135232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:17212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.663 
[2024-07-25 01:28:35.135251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.663 [2024-07-25 01:28:35.144616] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.663 [2024-07-25 01:28:35.144849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.663 [2024-07-25 01:28:35.144867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.924 [2024-07-25 01:28:35.154418] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.924 [2024-07-25 01:28:35.154657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:23340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.924 [2024-07-25 01:28:35.154675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.924 [2024-07-25 01:28:35.164066] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.924 [2024-07-25 01:28:35.164303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:25349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.924 [2024-07-25 01:28:35.164321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.924 [2024-07-25 01:28:35.173595] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.924 [2024-07-25 01:28:35.173845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6952 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:28:12.924 [2024-07-25 01:28:35.173863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.924 [2024-07-25 01:28:35.183156] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.924 [2024-07-25 01:28:35.184000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:4623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.924 [2024-07-25 01:28:35.184018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.924 [2024-07-25 01:28:35.192934] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.924 [2024-07-25 01:28:35.193163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:15695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.924 [2024-07-25 01:28:35.193181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:12.924 [2024-07-25 01:28:35.203362] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8618 00:28:12.924 [2024-07-25 01:28:35.204888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.924 [2024-07-25 01:28:35.204905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:12.924 [2024-07-25 01:28:35.216275] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f8a50 00:28:12.924 [2024-07-25 01:28:35.217210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:78 nsid:1 lba:11276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.924 [2024-07-25 01:28:35.217228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:12.924 [2024-07-25 01:28:35.225706] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190ecc78 00:28:12.924 [2024-07-25 01:28:35.226988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:19830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.924 [2024-07-25 01:28:35.227007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:12.924 [2024-07-25 01:28:35.234133] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f5be8 00:28:12.924 [2024-07-25 01:28:35.235023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:22463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.924 [2024-07-25 01:28:35.235041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:12.924 [2024-07-25 01:28:35.243335] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190ec408 00:28:12.924 [2024-07-25 01:28:35.244187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:19969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.924 [2024-07-25 01:28:35.244214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:12.924 [2024-07-25 01:28:35.252496] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f4b08 00:28:12.924 [2024-07-25 01:28:35.254339] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:12143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.924 [2024-07-25 01:28:35.254360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:12.924 [2024-07-25 01:28:35.267087] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190fd208 00:28:12.924 [2024-07-25 01:28:35.268091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:16331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.924 [2024-07-25 01:28:35.268109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.924 [2024-07-25 01:28:35.276840] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f3a28 00:28:12.924 [2024-07-25 01:28:35.277996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:10363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.924 [2024-07-25 01:28:35.278014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:12.924 [2024-07-25 01:28:35.286018] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f7538 00:28:12.924 [2024-07-25 01:28:35.287096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:22035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.924 [2024-07-25 01:28:35.287114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:12.925 [2024-07-25 01:28:35.294886] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f6458 
00:28:12.925 [2024-07-25 01:28:35.296650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:2375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.925 [2024-07-25 01:28:35.296668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:12.925 [2024-07-25 01:28:35.308855] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f0350 00:28:12.925 [2024-07-25 01:28:35.310087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:16656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.925 [2024-07-25 01:28:35.310106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:12.925 [2024-07-25 01:28:35.318802] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f6020 00:28:12.925 [2024-07-25 01:28:35.319068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:7920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.925 [2024-07-25 01:28:35.319087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:12.925 [2024-07-25 01:28:35.328371] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f6020 00:28:12.925 [2024-07-25 01:28:35.329126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:4399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.925 [2024-07-25 01:28:35.329145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:12.925 [2024-07-25 01:28:35.339903] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x17a0270) with pdu=0x2000190f4298 00:28:12.925 [2024-07-25 01:28:35.341060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:22008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.925 [2024-07-25 01:28:35.341078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:12.925 [2024-07-25 01:28:35.350282] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f6020 00:28:12.925 [2024-07-25 01:28:35.351061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:23861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.925 [2024-07-25 01:28:35.351081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:12.925 [2024-07-25 01:28:35.359994] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f6020 00:28:12.925 [2024-07-25 01:28:35.360248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:16891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.925 [2024-07-25 01:28:35.360266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.925 [2024-07-25 01:28:35.369509] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0270) with pdu=0x2000190f6020 00:28:12.925 [2024-07-25 01:28:35.369765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:21374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.925 [2024-07-25 01:28:35.369784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.925 00:28:12.925 Latency(us) 00:28:12.925 Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:12.925 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:12.925 nvme0n1 : 2.00 25862.22 101.02 0.00 0.00 4940.06 2493.22 31685.23 00:28:12.925 =================================================================================================================== 00:28:12.925 Total : 25862.22 101.02 0.00 0.00 4940.06 2493.22 31685.23 00:28:12.925 0 00:28:12.925 01:28:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:12.925 01:28:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:12.925 01:28:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:12.925 | .driver_specific 00:28:12.925 | .nvme_error 00:28:12.925 | .status_code 00:28:12.925 | .command_transient_transport_error' 00:28:12.925 01:28:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:13.186 01:28:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 203 > 0 )) 00:28:13.186 01:28:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1047102 00:28:13.186 01:28:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1047102 ']' 00:28:13.186 01:28:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1047102 00:28:13.186 01:28:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:28:13.186 01:28:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:13.186 01:28:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1047102 00:28:13.186 01:28:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:13.186 01:28:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:13.186 01:28:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1047102' 00:28:13.186 killing process with pid 1047102 00:28:13.186 01:28:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1047102 00:28:13.186 Received shutdown signal, test time was about 2.000000 seconds 00:28:13.186 00:28:13.186 Latency(us) 00:28:13.186 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:13.186 =================================================================================================================== 00:28:13.186 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:13.186 01:28:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1047102 00:28:13.446 01:28:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:28:13.446 01:28:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:13.446 01:28:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:28:13.446 01:28:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:28:13.446 01:28:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:28:13.446 01:28:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1047756 00:28:13.446 01:28:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1047756 /var/tmp/bperf.sock 00:28:13.446 01:28:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:28:13.446 01:28:35 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1047756 ']' 00:28:13.446 01:28:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:13.446 01:28:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:13.446 01:28:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:13.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:13.446 01:28:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:13.446 01:28:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:13.446 [2024-07-25 01:28:35.843951] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:28:13.446 [2024-07-25 01:28:35.843997] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1047756 ] 00:28:13.446 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:13.446 Zero copy mechanism will not be used. 
00:28:13.446 EAL: No free 2048 kB hugepages reported on node 1 00:28:13.446 [2024-07-25 01:28:35.898280] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:13.707 [2024-07-25 01:28:35.966736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:14.277 01:28:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:14.277 01:28:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:28:14.277 01:28:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:14.277 01:28:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:14.538 01:28:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:14.538 01:28:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.538 01:28:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:14.538 01:28:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.538 01:28:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:14.538 01:28:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:14.798 nvme0n1 00:28:14.798 01:28:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o 
crc32c -t corrupt -i 32 00:28:14.798 01:28:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.798 01:28:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:14.798 01:28:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.798 01:28:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:14.799 01:28:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:14.799 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:14.799 Zero copy mechanism will not be used. 00:28:14.799 Running I/O for 2 seconds... 00:28:15.060 [2024-07-25 01:28:37.338111] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:15.060 [2024-07-25 01:28:37.338809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.060 [2024-07-25 01:28:37.338839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:15.060 [2024-07-25 01:28:37.357803] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:15.060 [2024-07-25 01:28:37.358363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.060 [2024-07-25 01:28:37.358388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:15.060 [2024-07-25 01:28:37.378982] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with 
pdu=0x2000190fef90 00:28:15.061 [2024-07-25 01:28:37.379588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.061 [2024-07-25 01:28:37.379610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:15.061 [2024-07-25 01:28:37.399576] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:15.061 [2024-07-25 01:28:37.400277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.061 [2024-07-25 01:28:37.400306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.061 [2024-07-25 01:28:37.420062] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:15.061 [2024-07-25 01:28:37.420751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.061 [2024-07-25 01:28:37.420772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:15.061 [2024-07-25 01:28:37.440067] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:15.061 [2024-07-25 01:28:37.440811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.061 [2024-07-25 01:28:37.440831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:15.061 [2024-07-25 01:28:37.462440] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:15.061 [2024-07-25 01:28:37.462849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.061 [2024-07-25 01:28:37.462869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:15.061 [2024-07-25 01:28:37.483167] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:15.061 [2024-07-25 01:28:37.483847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.061 [2024-07-25 01:28:37.483866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.061 [2024-07-25 01:28:37.501671] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:15.061 [2024-07-25 01:28:37.502317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.061 [2024-07-25 01:28:37.502336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:15.061 [2024-07-25 01:28:37.521288] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:15.061 [2024-07-25 01:28:37.522182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.061 [2024-07-25 01:28:37.522201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:28:15.061 [2024-07-25 01:28:37.540728] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:15.061 [2024-07-25 01:28:37.541417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.061 [2024-07-25 01:28:37.541437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:15.320 [2024-07-25 01:28:37.563298] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:15.320 [2024-07-25 01:28:37.563887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.320 [2024-07-25 01:28:37.563907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.320 [2024-07-25 01:28:37.584110] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:15.320 [2024-07-25 01:28:37.584743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.320 [2024-07-25 01:28:37.584761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:15.320 [2024-07-25 01:28:37.605847] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:15.320 [2024-07-25 01:28:37.606332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.320 [2024-07-25 01:28:37.606351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:15.320 [2024-07-25 01:28:37.625594] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:15.320 [2024-07-25 01:28:37.626375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.320 [2024-07-25 01:28:37.626394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:15.320 [2024-07-25 01:28:37.646429] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:15.320 [2024-07-25 01:28:37.647144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.320 [2024-07-25 01:28:37.647169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.320 [2024-07-25 01:28:37.667426] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:15.320 [2024-07-25 01:28:37.667968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.320 [2024-07-25 01:28:37.667988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:15.320 [2024-07-25 01:28:37.688759] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:15.320 [2024-07-25 01:28:37.689484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.320 [2024-07-25 01:28:37.689502] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:15.320 [2024-07-25 01:28:37.710259] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:15.320 [2024-07-25 01:28:37.710820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.320 [2024-07-25 01:28:37.710839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:15.321 [2024-07-25 01:28:37.730967] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:15.321 [2024-07-25 01:28:37.731671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.321 [2024-07-25 01:28:37.731690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.321 [2024-07-25 01:28:37.751180] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:15.321 [2024-07-25 01:28:37.751972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.321 [2024-07-25 01:28:37.751991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:15.321 [2024-07-25 01:28:37.774036] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:15.321 [2024-07-25 01:28:37.774846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:15.321 [2024-07-25 01:28:37.774864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:15.321 [2024-07-25 01:28:37.796620] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:15.321 [2024-07-25 01:28:37.797499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.321 [2024-07-25 01:28:37.797517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:15.581 [2024-07-25 01:28:37.818928] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:15.581 [2024-07-25 01:28:37.819666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.581 [2024-07-25 01:28:37.819687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.581 [2024-07-25 01:28:37.841765] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:15.581 [2024-07-25 01:28:37.842485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.581 [2024-07-25 01:28:37.842504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:15.581 [2024-07-25 01:28:37.863492] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:15.581 [2024-07-25 01:28:37.864139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.581 [2024-07-25 01:28:37.864158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:15.581 [2024-07-25 01:28:37.885777] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:15.581 [2024-07-25 01:28:37.886411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.581 [2024-07-25 01:28:37.886430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:15.581 [2024-07-25 01:28:37.908348] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:15.581 [2024-07-25 01:28:37.909121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.581 [2024-07-25 01:28:37.909141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.581 [2024-07-25 01:28:37.932164] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:15.581 [2024-07-25 01:28:37.932796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.581 [2024-07-25 01:28:37.932816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:15.581 [2024-07-25 01:28:37.955618] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:15.581 [2024-07-25 01:28:37.956313] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.581 [2024-07-25 01:28:37.956332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:15.581 [2024-07-25 01:28:37.978599] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:15.581 [2024-07-25 01:28:37.979162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.581 [2024-07-25 01:28:37.979180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:15.581 [2024-07-25 01:28:38.007985] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:15.581 [2024-07-25 01:28:38.008930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.581 [2024-07-25 01:28:38.008949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.581 [2024-07-25 01:28:38.030659] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:15.581 [2024-07-25 01:28:38.031384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.581 [2024-07-25 01:28:38.031407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:15.581 [2024-07-25 01:28:38.060935] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 
00:28:15.581 [2024-07-25 01:28:38.061497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.581 [2024-07-25 01:28:38.061516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:15.841 [2024-07-25 01:28:38.089886] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:15.841 [2024-07-25 01:28:38.090312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.841 [2024-07-25 01:28:38.090331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:15.841 [2024-07-25 01:28:38.112352] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:15.841 [2024-07-25 01:28:38.113310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.841 [2024-07-25 01:28:38.113328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.841 [2024-07-25 01:28:38.136053] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:15.841 [2024-07-25 01:28:38.136758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.841 [2024-07-25 01:28:38.136777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:15.841 [2024-07-25 01:28:38.159413] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:15.841 [2024-07-25 01:28:38.159980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.841 [2024-07-25 01:28:38.159999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:15.841 [2024-07-25 01:28:38.180986] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:15.841 [2024-07-25 01:28:38.181685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.842 [2024-07-25 01:28:38.181704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:15.842 [2024-07-25 01:28:38.201770] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:15.842 [2024-07-25 01:28:38.202495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.842 [2024-07-25 01:28:38.202514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.842 [2024-07-25 01:28:38.231787] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:15.842 [2024-07-25 01:28:38.232435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.842 [2024-07-25 01:28:38.232460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:15.842 [2024-07-25 
01:28:38.255397] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:15.842 [2024-07-25 01:28:38.256205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.842 [2024-07-25 01:28:38.256223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:15.842 [2024-07-25 01:28:38.276269] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:15.842 [2024-07-25 01:28:38.276895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.842 [2024-07-25 01:28:38.276913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:15.842 [2024-07-25 01:28:38.297867] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:15.842 [2024-07-25 01:28:38.298484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.842 [2024-07-25 01:28:38.298502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.842 [2024-07-25 01:28:38.319003] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:15.842 [2024-07-25 01:28:38.319721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.842 [2024-07-25 01:28:38.319740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:16.102 [2024-07-25 01:28:38.340366] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:16.102 [2024-07-25 01:28:38.341155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.102 [2024-07-25 01:28:38.341174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:16.102 [2024-07-25 01:28:38.362500] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:16.102 [2024-07-25 01:28:38.363217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.102 [2024-07-25 01:28:38.363236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:16.102 [2024-07-25 01:28:38.384390] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:16.102 [2024-07-25 01:28:38.385179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.102 [2024-07-25 01:28:38.385197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.102 [2024-07-25 01:28:38.404770] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:16.102 [2024-07-25 01:28:38.405585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.102 [2024-07-25 01:28:38.405604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:16.102 [2024-07-25 01:28:38.424029] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:16.102 [2024-07-25 01:28:38.424675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.102 [2024-07-25 01:28:38.424694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:16.102 [2024-07-25 01:28:38.445105] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:16.102 [2024-07-25 01:28:38.445637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.102 [2024-07-25 01:28:38.445656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:16.102 [2024-07-25 01:28:38.463725] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:16.102 [2024-07-25 01:28:38.464353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.102 [2024-07-25 01:28:38.464372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.102 [2024-07-25 01:28:38.483695] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:16.102 [2024-07-25 01:28:38.484571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.102 [2024-07-25 01:28:38.484590] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:16.102 [2024-07-25 01:28:38.506534] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:16.102 [2024-07-25 01:28:38.507098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.102 [2024-07-25 01:28:38.507116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:16.102 [2024-07-25 01:28:38.528190] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:16.102 [2024-07-25 01:28:38.528724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.102 [2024-07-25 01:28:38.528743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:16.102 [2024-07-25 01:28:38.549142] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:16.102 [2024-07-25 01:28:38.549824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.102 [2024-07-25 01:28:38.549842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.102 [2024-07-25 01:28:38.571306] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:16.102 [2024-07-25 01:28:38.571604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:16.102 [2024-07-25 01:28:38.571622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:16.102 [2024-07-25 01:28:38.593407] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:16.102 [2024-07-25 01:28:38.594009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.103 [2024-07-25 01:28:38.594027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:16.363 [2024-07-25 01:28:38.613560] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:16.363 [2024-07-25 01:28:38.614193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.363 [2024-07-25 01:28:38.614216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:16.363 [2024-07-25 01:28:38.635986] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:16.363 [2024-07-25 01:28:38.636705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.363 [2024-07-25 01:28:38.636724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.363 [2024-07-25 01:28:38.656928] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:16.363 [2024-07-25 01:28:38.657470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.363 [2024-07-25 01:28:38.657489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:16.363 [2024-07-25 01:28:38.678056] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:16.363 [2024-07-25 01:28:38.678533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.363 [2024-07-25 01:28:38.678552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:16.363 [2024-07-25 01:28:38.699194] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:16.363 [2024-07-25 01:28:38.699951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.363 [2024-07-25 01:28:38.699969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:16.363 [2024-07-25 01:28:38.720721] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:16.363 [2024-07-25 01:28:38.721497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.363 [2024-07-25 01:28:38.721516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.363 [2024-07-25 01:28:38.743348] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:16.363 [2024-07-25 01:28:38.743980] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.363 [2024-07-25 01:28:38.743998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:16.363 [2024-07-25 01:28:38.766202] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:16.363 [2024-07-25 01:28:38.766810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.363 [2024-07-25 01:28:38.766829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:16.363 [2024-07-25 01:28:38.788549] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:16.363 [2024-07-25 01:28:38.789184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.363 [2024-07-25 01:28:38.789203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:16.363 [2024-07-25 01:28:38.809664] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:16.363 [2024-07-25 01:28:38.810391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.363 [2024-07-25 01:28:38.810410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.363 [2024-07-25 01:28:38.830781] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 
00:28:16.363 [2024-07-25 01:28:38.831411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.363 [2024-07-25 01:28:38.831430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:16.363 [2024-07-25 01:28:38.850377] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:16.363 [2024-07-25 01:28:38.850849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.363 [2024-07-25 01:28:38.850868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:16.624 [2024-07-25 01:28:38.871945] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:16.624 [2024-07-25 01:28:38.872567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.624 [2024-07-25 01:28:38.872586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:16.624 [2024-07-25 01:28:38.893254] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:16.624 [2024-07-25 01:28:38.893879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.624 [2024-07-25 01:28:38.893898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.624 [2024-07-25 01:28:38.913897] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:16.624 [2024-07-25 01:28:38.914301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.624 [2024-07-25 01:28:38.914320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:16.624 [2024-07-25 01:28:38.932996] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:16.624 [2024-07-25 01:28:38.933625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.624 [2024-07-25 01:28:38.933644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:16.624 [2024-07-25 01:28:38.953395] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:16.624 [2024-07-25 01:28:38.954119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.624 [2024-07-25 01:28:38.954137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:16.624 [2024-07-25 01:28:38.974240] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:16.624 [2024-07-25 01:28:38.974725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.624 [2024-07-25 01:28:38.974744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.624 [2024-07-25 
01:28:38.996979] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:16.624 [2024-07-25 01:28:38.997777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.624 [2024-07-25 01:28:38.997796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:16.624 [2024-07-25 01:28:39.019311] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:16.624 [2024-07-25 01:28:39.020248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.624 [2024-07-25 01:28:39.020266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:16.624 [2024-07-25 01:28:39.042829] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:16.624 [2024-07-25 01:28:39.043564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.624 [2024-07-25 01:28:39.043583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:16.624 [2024-07-25 01:28:39.064979] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:16.624 [2024-07-25 01:28:39.065802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.624 [2024-07-25 01:28:39.065821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.624 [2024-07-25 01:28:39.087943] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:16.624 [2024-07-25 01:28:39.088507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.624 [2024-07-25 01:28:39.088525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:16.624 [2024-07-25 01:28:39.110109] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:16.624 [2024-07-25 01:28:39.110935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.624 [2024-07-25 01:28:39.110953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:16.885 [2024-07-25 01:28:39.131092] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:16.885 [2024-07-25 01:28:39.131661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.885 [2024-07-25 01:28:39.131680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:16.885 [2024-07-25 01:28:39.151307] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:16.885 [2024-07-25 01:28:39.151934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.885 [2024-07-25 01:28:39.151953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.885 [2024-07-25 01:28:39.172757] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:16.885 [2024-07-25 01:28:39.173397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.885 [2024-07-25 01:28:39.173420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:16.885 [2024-07-25 01:28:39.193855] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:16.885 [2024-07-25 01:28:39.194365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.885 [2024-07-25 01:28:39.194384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:16.885 [2024-07-25 01:28:39.217779] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:16.885 [2024-07-25 01:28:39.218192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.885 [2024-07-25 01:28:39.218211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:16.885 [2024-07-25 01:28:39.242000] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:16.885 [2024-07-25 01:28:39.242649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.885 [2024-07-25 01:28:39.242668] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.885 [2024-07-25 01:28:39.263914] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:16.885 [2024-07-25 01:28:39.264413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.885 [2024-07-25 01:28:39.264432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:16.885 [2024-07-25 01:28:39.285539] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17a0410) with pdu=0x2000190fef90 00:28:16.885 [2024-07-25 01:28:39.286541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.885 [2024-07-25 01:28:39.286560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:16.885 00:28:16.885 Latency(us) 00:28:16.885 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:16.885 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:16.885 nvme0n1 : 2.01 1400.32 175.04 0.00 0.00 11392.25 7864.32 36700.16 00:28:16.885 =================================================================================================================== 00:28:16.885 Total : 1400.32 175.04 0.00 0.00 11392.25 7864.32 36700.16 00:28:16.885 0 00:28:16.885 01:28:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:16.885 01:28:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:16.885 01:28:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:16.885 | .driver_specific 00:28:16.885 | .nvme_error 00:28:16.885 | .status_code 00:28:16.885 | .command_transient_transport_error' 00:28:16.885 01:28:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:17.146 01:28:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 90 > 0 )) 00:28:17.146 01:28:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1047756 00:28:17.146 01:28:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1047756 ']' 00:28:17.146 01:28:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1047756 00:28:17.146 01:28:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:28:17.146 01:28:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:17.146 01:28:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1047756 00:28:17.146 01:28:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:17.146 01:28:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:17.146 01:28:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1047756' 00:28:17.146 killing process with pid 1047756 00:28:17.146 01:28:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1047756 00:28:17.146 Received shutdown signal, test time was about 2.000000 seconds 00:28:17.146 00:28:17.146 Latency(us) 00:28:17.146 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:17.146 
=================================================================================================================== 00:28:17.146 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:17.146 01:28:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1047756 00:28:17.407 01:28:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1045633 00:28:17.407 01:28:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1045633 ']' 00:28:17.407 01:28:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1045633 00:28:17.407 01:28:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:28:17.407 01:28:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:17.407 01:28:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1045633 00:28:17.407 01:28:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:17.407 01:28:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:17.407 01:28:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1045633' 00:28:17.407 killing process with pid 1045633 00:28:17.407 01:28:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1045633 00:28:17.407 01:28:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1045633 00:28:17.667 00:28:17.667 real 0m17.080s 00:28:17.667 user 0m33.835s 00:28:17.667 sys 0m3.419s 00:28:17.667 01:28:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:17.667 01:28:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:17.667 
************************************ 00:28:17.667 END TEST nvmf_digest_error 00:28:17.667 ************************************ 00:28:17.667 01:28:39 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:28:17.667 01:28:39 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:28:17.667 01:28:39 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:28:17.667 01:28:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:17.667 01:28:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:28:17.667 01:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:17.667 01:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:28:17.667 01:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:17.667 01:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:17.667 rmmod nvme_tcp 00:28:17.667 rmmod nvme_fabrics 00:28:17.667 rmmod nvme_keyring 00:28:17.667 01:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:17.667 01:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:28:17.667 01:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:28:17.667 01:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 1045633 ']' 00:28:17.667 01:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 1045633 00:28:17.667 01:28:40 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 1045633 ']' 00:28:17.667 01:28:40 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 1045633 00:28:17.667 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1045633) - No such process 00:28:17.667 01:28:40 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 1045633 is not found' 00:28:17.667 Process with pid 1045633 is not found 00:28:17.668 01:28:40 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:17.668 01:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:17.668 01:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:17.668 01:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:17.668 01:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:17.668 01:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:17.668 01:28:40 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:17.668 01:28:40 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:20.211 01:28:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:20.211 00:28:20.211 real 0m41.836s 00:28:20.211 user 1m9.123s 00:28:20.211 sys 0m10.952s 00:28:20.211 01:28:42 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:20.211 01:28:42 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:20.211 ************************************ 00:28:20.211 END TEST nvmf_digest 00:28:20.211 ************************************ 00:28:20.211 01:28:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:20.211 01:28:42 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:28:20.211 01:28:42 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:28:20.211 01:28:42 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:28:20.211 01:28:42 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:20.211 01:28:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:20.211 01:28:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:20.211 01:28:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:20.211 
************************************ 00:28:20.211 START TEST nvmf_bdevperf 00:28:20.211 ************************************ 00:28:20.211 01:28:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:20.211 * Looking for test storage... 00:28:20.211 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:20.211 01:28:42 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:20.211 01:28:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:28:20.211 01:28:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:20.211 01:28:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:20.211 01:28:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:20.211 01:28:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:20.211 01:28:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:20.211 01:28:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:20.211 01:28:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:20.211 01:28:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:20.211 01:28:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:20.211 01:28:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:20.211 01:28:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:20.211 01:28:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:20.211 01:28:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:28:20.211 01:28:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:20.211 01:28:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:20.211 01:28:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:20.211 01:28:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:20.211 01:28:42 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:20.211 01:28:42 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:20.211 01:28:42 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:20.211 01:28:42 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.211 01:28:42 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.211 01:28:42 
nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.211 01:28:42 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:28:20.211 01:28:42 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.211 01:28:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:28:20.211 01:28:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:20.211 01:28:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:20.211 01:28:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:20.211 01:28:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:20.211 01:28:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:20.211 01:28:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:20.211 01:28:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 
-- # '[' 0 -eq 1 ']' 00:28:20.211 01:28:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:20.211 01:28:42 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:20.211 01:28:42 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:20.211 01:28:42 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:28:20.211 01:28:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:20.211 01:28:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:20.211 01:28:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:20.211 01:28:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:20.211 01:28:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:20.211 01:28:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:20.211 01:28:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:20.211 01:28:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:20.211 01:28:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:20.211 01:28:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:20.211 01:28:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:28:20.211 01:28:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:25.495 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:25.495 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:28:25.495 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:25.495 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:25.495 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # 
local -a pci_net_devs 00:28:25.495 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:25.495 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:25.495 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:28:25.495 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:25.495 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:28:25.495 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:28:25.495 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:28:25.495 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:28:25.495 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:28:25.495 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:28:25.495 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:25.495 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:25.495 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:25.495 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:25.495 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:25.495 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:25.495 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:25.495 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:25.495 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:25.495 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:25.495 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:25.495 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:25.495 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:25.495 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:25.495 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:25.495 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:25.495 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:25.495 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:25.495 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:25.495 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:25.495 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:25.495 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:25.495 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:25.495 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:25.495 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:25.495 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:25.495 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:25.495 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:25.495 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:25.495 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:25.495 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:28:25.495 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:25.495 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:25.495 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:25.495 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:25.495 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:25.495 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:25.495 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:25.495 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:25.495 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:25.495 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:25.495 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:25.495 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:25.495 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:25.495 Found net devices under 0000:86:00.0: cvl_0_0 00:28:25.495 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:25.495 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:25.495 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:25.495 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:25.495 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:25.495 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:25.495 01:28:47 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:25.495 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:25.496 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:25.496 Found net devices under 0000:86:00.1: cvl_0_1 00:28:25.496 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:25.496 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:25.496 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:28:25.496 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:25.496 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:25.496 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:25.496 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:25.496 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:25.496 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:25.496 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:25.496 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:25.496 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:25.496 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:25.496 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:25.496 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:25.496 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:25.496 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 
-- # ip -4 addr flush cvl_0_1 00:28:25.496 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:25.496 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:25.496 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:25.496 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:25.496 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:25.496 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:25.496 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:25.496 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:25.496 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:25.496 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:25.496 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:28:25.496 00:28:25.496 --- 10.0.0.2 ping statistics --- 00:28:25.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:25.496 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:28:25.496 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:25.496 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:25.496 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.303 ms 00:28:25.496 00:28:25.496 --- 10.0.0.1 ping statistics --- 00:28:25.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:25.496 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:28:25.496 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:25.496 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:28:25.496 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:25.496 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:25.496 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:25.496 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:25.496 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:25.496 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:25.496 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:25.496 01:28:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:28:25.496 01:28:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:25.496 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:25.496 01:28:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:25.496 01:28:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:25.496 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1051929 00:28:25.496 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1051929 00:28:25.496 01:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:25.496 01:28:47 
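The trace above shows `nvmf_tcp_init` building the single-host test topology: one port of the two-port NIC (`cvl_0_0`) is moved into a network namespace to act as the target side, both ends get `10.0.0.x/24` addresses, the NVMe/TCP port is opened in the firewall, and a ping in each direction confirms reachability. As a sketch only (running it for real requires root and the actual `cvl_0_*` interfaces), the sequence can be reconstructed as a list of argv vectors for inspection:

```python
def netns_topology_plan(tgt_if="cvl_0_0", init_if="cvl_0_1",
                        ns="cvl_0_0_ns_spdk",
                        tgt_ip="10.0.0.2", init_ip="10.0.0.1", port=4420):
    """Reconstruct the nvmf_tcp_init command sequence seen in the log:
    move the target port into a namespace, address both ends, bring the
    links up, open the NVMe/TCP port, and ping across. Interface and
    namespace names are the defaults from this run; a sketch, not a replay.
    """
    in_ns = ["ip", "netns", "exec", ns]  # prefix for commands run inside the target namespace
    return [
        ["ip", "netns", "add", ns],
        ["ip", "link", "set", tgt_if, "netns", ns],
        ["ip", "addr", "add", f"{init_ip}/24", "dev", init_if],
        in_ns + ["ip", "addr", "add", f"{tgt_ip}/24", "dev", tgt_if],
        ["ip", "link", "set", init_if, "up"],
        in_ns + ["ip", "link", "set", tgt_if, "up"],
        in_ns + ["ip", "link", "set", "lo", "up"],
        # Accept NVMe/TCP traffic arriving on the initiator-side interface.
        ["iptables", "-I", "INPUT", "1", "-i", init_if, "-p", "tcp",
         "--dport", str(port), "-j", "ACCEPT"],
        ["ping", "-c", "1", tgt_ip],
    ]
```

The namespace gives the target its own network stack, so initiator and target exercise a real TCP path on one machine instead of loopback shortcuts.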
nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 1051929 ']' 00:28:25.496 01:28:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:25.496 01:28:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:25.496 01:28:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:25.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:25.496 01:28:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:25.496 01:28:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:25.496 [2024-07-25 01:28:47.874809] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:28:25.496 [2024-07-25 01:28:47.874855] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:25.496 EAL: No free 2048 kB hugepages reported on node 1 00:28:25.496 [2024-07-25 01:28:47.932852] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:25.756 [2024-07-25 01:28:48.015342] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:25.756 [2024-07-25 01:28:48.015378] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:25.756 [2024-07-25 01:28:48.015386] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:25.756 [2024-07-25 01:28:48.015392] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:28:25.756 [2024-07-25 01:28:48.015397] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:25.756 [2024-07-25 01:28:48.015499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:25.756 [2024-07-25 01:28:48.015584] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:25.756 [2024-07-25 01:28:48.015585] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:26.328 01:28:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:26.328 01:28:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:28:26.328 01:28:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:26.328 01:28:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:26.328 01:28:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:26.328 01:28:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:26.328 01:28:48 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:26.328 01:28:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.328 01:28:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:26.328 [2024-07-25 01:28:48.732199] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:26.328 01:28:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.328 01:28:48 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:26.328 01:28:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.328 01:28:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:26.328 Malloc0 00:28:26.328 01:28:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- 
# [[ 0 == 0 ]] 00:28:26.328 01:28:48 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:26.328 01:28:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.328 01:28:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:26.328 01:28:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.328 01:28:48 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:26.328 01:28:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.328 01:28:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:26.328 01:28:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.328 01:28:48 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:26.328 01:28:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.328 01:28:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:26.328 [2024-07-25 01:28:48.793968] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:26.328 01:28:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.328 01:28:48 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:28:26.328 01:28:48 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:28:26.328 01:28:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:28:26.328 01:28:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:28:26.328 01:28:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for 
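The `tgt_init` phase above drives the target through five `rpc_cmd` calls: create the TCP transport, create a 64 MiB / 512-byte-block malloc bdev, create subsystem `cnode1`, attach the bdev as a namespace, and listen on `10.0.0.2:4420`. Under the hood `rpc_cmd` sends JSON-RPC to the target's Unix socket (`/var/tmp/spdk.sock` in this run); the payloads can be sketched as below, with parameter names taken from SPDK's JSON-RPC documentation rather than captured from this run, so treat them as an assumption:

```python
import itertools
import json

_id = itertools.count(1)

def rpc_request(method, **params):
    # One JSON-RPC 2.0 request of the kind rpc_cmd writes to the target socket.
    return json.dumps({"jsonrpc": "2.0", "id": next(_id),
                       "method": method, "params": params})

# The tgt_init sequence from the log, as raw RPC payloads.
# 64 MiB at 512-byte blocks -> 131072 blocks for bdev_malloc_create.
setup = [
    rpc_request("nvmf_create_transport", trtype="TCP", io_unit_size=8192),
    rpc_request("bdev_malloc_create", num_blocks=131072, block_size=512,
                name="Malloc0"),
    rpc_request("nvmf_create_subsystem", nqn="nqn.2016-06.io.spdk:cnode1",
                allow_any_host=True, serial_number="SPDK00000000000001"),
    rpc_request("nvmf_subsystem_add_ns", nqn="nqn.2016-06.io.spdk:cnode1",
                namespace={"bdev_name": "Malloc0"}),
    rpc_request("nvmf_subsystem_add_listener", nqn="nqn.2016-06.io.spdk:cnode1",
                listen_address={"trtype": "TCP", "traddr": "10.0.0.2",
                                "adrfam": "ipv4", "trsvcid": "4420"}),
]
```

Once the last call succeeds, the log's "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice appears and bdevperf can connect.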
subsystem in "${@:-1}" 00:28:26.328 01:28:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:26.328 { 00:28:26.328 "params": { 00:28:26.328 "name": "Nvme$subsystem", 00:28:26.328 "trtype": "$TEST_TRANSPORT", 00:28:26.328 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:26.328 "adrfam": "ipv4", 00:28:26.328 "trsvcid": "$NVMF_PORT", 00:28:26.328 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:26.328 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:26.328 "hdgst": ${hdgst:-false}, 00:28:26.328 "ddgst": ${ddgst:-false} 00:28:26.328 }, 00:28:26.328 "method": "bdev_nvme_attach_controller" 00:28:26.328 } 00:28:26.328 EOF 00:28:26.328 )") 00:28:26.328 01:28:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:28:26.328 01:28:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:28:26.328 01:28:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:28:26.328 01:28:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:26.328 "params": { 00:28:26.328 "name": "Nvme1", 00:28:26.328 "trtype": "tcp", 00:28:26.328 "traddr": "10.0.0.2", 00:28:26.328 "adrfam": "ipv4", 00:28:26.328 "trsvcid": "4420", 00:28:26.328 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:26.328 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:26.328 "hdgst": false, 00:28:26.328 "ddgst": false 00:28:26.328 }, 00:28:26.328 "method": "bdev_nvme_attach_controller" 00:28:26.328 }' 00:28:26.589 [2024-07-25 01:28:48.841851] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:28:26.589 [2024-07-25 01:28:48.841893] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1052012 ] 00:28:26.589 EAL: No free 2048 kB hugepages reported on node 1 00:28:26.589 [2024-07-25 01:28:48.895201] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:26.589 [2024-07-25 01:28:48.969264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:26.848 Running I/O for 1 seconds... 00:28:27.789 00:28:27.789 Latency(us) 00:28:27.789 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:27.789 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:27.789 Verification LBA range: start 0x0 length 0x4000 00:28:27.789 Nvme1n1 : 1.01 10978.19 42.88 0.00 0.00 11592.00 2293.76 30545.47 00:28:27.789 =================================================================================================================== 00:28:27.789 Total : 10978.19 42.88 0.00 0.00 11592.00 2293.76 30545.47 00:28:28.050 01:28:50 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1052247 00:28:28.050 01:28:50 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:28:28.050 01:28:50 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:28:28.050 01:28:50 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:28:28.050 01:28:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:28:28.050 01:28:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:28:28.050 01:28:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:28.050 01:28:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat 
<<-EOF 00:28:28.050 { 00:28:28.050 "params": { 00:28:28.050 "name": "Nvme$subsystem", 00:28:28.050 "trtype": "$TEST_TRANSPORT", 00:28:28.050 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:28.050 "adrfam": "ipv4", 00:28:28.050 "trsvcid": "$NVMF_PORT", 00:28:28.050 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:28.050 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:28.050 "hdgst": ${hdgst:-false}, 00:28:28.050 "ddgst": ${ddgst:-false} 00:28:28.050 }, 00:28:28.050 "method": "bdev_nvme_attach_controller" 00:28:28.050 } 00:28:28.050 EOF 00:28:28.050 )") 00:28:28.050 01:28:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:28:28.050 01:28:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:28:28.050 01:28:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:28:28.050 01:28:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:28.050 "params": { 00:28:28.050 "name": "Nvme1", 00:28:28.050 "trtype": "tcp", 00:28:28.050 "traddr": "10.0.0.2", 00:28:28.050 "adrfam": "ipv4", 00:28:28.050 "trsvcid": "4420", 00:28:28.050 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:28.050 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:28.050 "hdgst": false, 00:28:28.050 "ddgst": false 00:28:28.050 }, 00:28:28.050 "method": "bdev_nvme_attach_controller" 00:28:28.050 }' 00:28:28.050 [2024-07-25 01:28:50.405991] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
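The heredoc printed twice above is `gen_nvmf_target_json` expanding one `bdev_nvme_attach_controller` config entry per subsystem and handing the result to bdevperf over `/dev/fd/62` (or `/dev/fd/63`), so no config file ever touches disk. A rough Python analogue of the entry generation, with field values read off the `printf`'d output in the log (the surrounding wrapper that bdevperf's `--json` mode expects is not shown in the log and is omitted here):

```python
import json

def gen_attach_entries(subsystems=(1,), target_ip="10.0.0.2", port="4420",
                       hdgst=False, ddgst=False):
    """Emit the per-controller config entries the shell heredoc expands:
    one bdev_nvme_attach_controller call per subsystem, with NQNs derived
    from the subsystem number. A sketch of the helper seen in the log."""
    entries = []
    for n in subsystems:
        entries.append({
            "params": {
                "name": f"Nvme{n}",
                "trtype": "tcp",
                "traddr": target_ip,
                "adrfam": "ipv4",
                "trsvcid": port,
                "subnqn": f"nqn.2016-06.io.spdk:cnode{n}",
                "hostnqn": f"nqn.2016-06.io.spdk:host{n}",
                "hdgst": hdgst,
                "ddgst": ddgst,
            },
            "method": "bdev_nvme_attach_controller",
        })
    return json.dumps(entries, indent=2)
```

Feeding the config through a file descriptor keeps each bdevperf run self-describing: the first run (`-t 1`) is a sanity pass, while the second (`-t 15 -f`) runs long enough for the `kill -9` of the target to land mid-I/O, which is what produces the `ABORTED - SQ DELETION` completions that follow.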
00:28:28.050 [2024-07-25 01:28:50.406038] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1052247 ]
00:28:28.050 EAL: No free 2048 kB hugepages reported on node 1
00:28:28.050 [2024-07-25 01:28:50.461720] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:28.050 [2024-07-25 01:28:50.532457] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:28:28.310 Running I/O for 15 seconds...
00:28:30.879 01:28:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1051929
00:28:30.879 01:28:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:28:31.142 [2024-07-25 01:28:53.376855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:96472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.142 [2024-07-25 01:28:53.376893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:31.142 [2024-07-25 01:28:53.376910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:96536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:31.142 [2024-07-25 01:28:53.376920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE command / ABORTED - SQ DELETION notice pairs repeated for each outstanding command, lba 96544 through 97360 in steps of 8 ...]
00:28:31.144 [2024-07-25 01:28:53.378549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:97368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:31.144 [2024-07-25 01:28:53.378555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:31.144 [2024-07-25
01:28:53.378563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:97376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.144 [2024-07-25 01:28:53.378570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.144 [2024-07-25 01:28:53.378578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:97384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.144 [2024-07-25 01:28:53.378584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.144 [2024-07-25 01:28:53.378592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:97392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.144 [2024-07-25 01:28:53.378598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.145 [2024-07-25 01:28:53.378606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:97400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.145 [2024-07-25 01:28:53.378613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.145 [2024-07-25 01:28:53.378621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:97408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.145 [2024-07-25 01:28:53.378631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.145 [2024-07-25 01:28:53.378639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:97416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.145 [2024-07-25 01:28:53.378645] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.145 [2024-07-25 01:28:53.378653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:97424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.145 [2024-07-25 01:28:53.378660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.145 [2024-07-25 01:28:53.378669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:97432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.145 [2024-07-25 01:28:53.378675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.145 [2024-07-25 01:28:53.378683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:97440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.145 [2024-07-25 01:28:53.378690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.145 [2024-07-25 01:28:53.378697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:97448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.145 [2024-07-25 01:28:53.378704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.145 [2024-07-25 01:28:53.378712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:97456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.145 [2024-07-25 01:28:53.378718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.145 [2024-07-25 01:28:53.378726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:97464 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:28:31.145 [2024-07-25 01:28:53.378733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.145 [2024-07-25 01:28:53.378740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:97472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.145 [2024-07-25 01:28:53.378747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.145 [2024-07-25 01:28:53.378754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:97480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.145 [2024-07-25 01:28:53.378761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.145 [2024-07-25 01:28:53.378769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:97488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.145 [2024-07-25 01:28:53.378775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.145 [2024-07-25 01:28:53.378785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:96480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.145 [2024-07-25 01:28:53.378791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.145 [2024-07-25 01:28:53.378799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.145 [2024-07-25 01:28:53.378805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.145 [2024-07-25 01:28:53.378815] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:96496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.145 [2024-07-25 01:28:53.378821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.145 [2024-07-25 01:28:53.378830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:96504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.145 [2024-07-25 01:28:53.378836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.145 [2024-07-25 01:28:53.378844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:96512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.145 [2024-07-25 01:28:53.378850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.145 [2024-07-25 01:28:53.378858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:96520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.145 [2024-07-25 01:28:53.378864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.145 [2024-07-25 01:28:53.378871] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f68930 is same with the state(5) to be set 00:28:31.145 [2024-07-25 01:28:53.378879] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:31.145 [2024-07-25 01:28:53.378884] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:31.145 [2024-07-25 01:28:53.378890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96528 len:8 PRP1 0x0 PRP2 0x0 00:28:31.145 [2024-07-25 01:28:53.378898] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.145 [2024-07-25 01:28:53.378940] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1f68930 was disconnected and freed. reset controller. 00:28:31.145 [2024-07-25 01:28:53.381886] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.145 [2024-07-25 01:28:53.381940] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:31.145 [2024-07-25 01:28:53.382732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.145 [2024-07-25 01:28:53.382749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:31.145 [2024-07-25 01:28:53.382756] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:31.145 [2024-07-25 01:28:53.382933] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:31.145 [2024-07-25 01:28:53.383115] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.145 [2024-07-25 01:28:53.383124] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.145 [2024-07-25 01:28:53.383131] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.145 [2024-07-25 01:28:53.385964] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:31.145 [2024-07-25 01:28:53.395134] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:31.145 [2024-07-25 01:28:53.395829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.145 [2024-07-25 01:28:53.395874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:31.145 [2024-07-25 01:28:53.395896] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:31.145 [2024-07-25 01:28:53.396418] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:31.145 [2024-07-25 01:28:53.396591] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:31.145 [2024-07-25 01:28:53.396599] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:31.145 [2024-07-25 01:28:53.396605] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:31.145 [2024-07-25 01:28:53.399414] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:31.145 [2024-07-25 01:28:53.408019] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:31.145 [2024-07-25 01:28:53.408582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.145 [2024-07-25 01:28:53.408625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:31.145 [2024-07-25 01:28:53.408646] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:31.145 [2024-07-25 01:28:53.409217] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:31.145 [2024-07-25 01:28:53.409391] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:31.145 [2024-07-25 01:28:53.409399] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:31.145 [2024-07-25 01:28:53.409405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:31.145 [2024-07-25 01:28:53.412117] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:31.145 [2024-07-25 01:28:53.421068] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:31.145 [2024-07-25 01:28:53.421788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.145 [2024-07-25 01:28:53.421832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:31.145 [2024-07-25 01:28:53.421854] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:31.145 [2024-07-25 01:28:53.422267] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:31.145 [2024-07-25 01:28:53.422439] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:31.145 [2024-07-25 01:28:53.422447] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:31.145 [2024-07-25 01:28:53.422453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:31.145 [2024-07-25 01:28:53.425146] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:31.145 [2024-07-25 01:28:53.434020] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:31.145 [2024-07-25 01:28:53.434664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.145 [2024-07-25 01:28:53.434707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:31.145 [2024-07-25 01:28:53.434729] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:31.145 [2024-07-25 01:28:53.435323] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:31.145 [2024-07-25 01:28:53.435621] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:31.146 [2024-07-25 01:28:53.435630] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:31.146 [2024-07-25 01:28:53.435639] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:31.146 [2024-07-25 01:28:53.438356] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:31.146 [2024-07-25 01:28:53.446887] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:31.146 [2024-07-25 01:28:53.447596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.146 [2024-07-25 01:28:53.447639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:31.146 [2024-07-25 01:28:53.447660] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:31.146 [2024-07-25 01:28:53.448254] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:31.146 [2024-07-25 01:28:53.448794] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:31.146 [2024-07-25 01:28:53.448802] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:31.146 [2024-07-25 01:28:53.448808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:31.146 [2024-07-25 01:28:53.451557] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:31.146 [2024-07-25 01:28:53.459826] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:31.146 [2024-07-25 01:28:53.460388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.146 [2024-07-25 01:28:53.460404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:31.146 [2024-07-25 01:28:53.460411] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:31.146 [2024-07-25 01:28:53.460581] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:31.146 [2024-07-25 01:28:53.460753] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:31.146 [2024-07-25 01:28:53.460761] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:31.146 [2024-07-25 01:28:53.460767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:31.146 [2024-07-25 01:28:53.463455] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:31.146 [2024-07-25 01:28:53.472721] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:31.146 [2024-07-25 01:28:53.473331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.146 [2024-07-25 01:28:53.473378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:31.146 [2024-07-25 01:28:53.473401] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:31.146 [2024-07-25 01:28:53.473981] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:31.146 [2024-07-25 01:28:53.474510] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:31.146 [2024-07-25 01:28:53.474519] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:31.146 [2024-07-25 01:28:53.474524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:31.146 [2024-07-25 01:28:53.477255] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:31.146 [2024-07-25 01:28:53.485602] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:31.146 [2024-07-25 01:28:53.486166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.146 [2024-07-25 01:28:53.486185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:31.146 [2024-07-25 01:28:53.486192] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:31.146 [2024-07-25 01:28:53.486363] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:31.146 [2024-07-25 01:28:53.486534] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:31.146 [2024-07-25 01:28:53.486542] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:31.146 [2024-07-25 01:28:53.486548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:31.146 [2024-07-25 01:28:53.489233] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:31.146 [2024-07-25 01:28:53.498561] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:31.146 [2024-07-25 01:28:53.499212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.146 [2024-07-25 01:28:53.499256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:31.146 [2024-07-25 01:28:53.499277] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:31.146 [2024-07-25 01:28:53.499649] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:31.146 [2024-07-25 01:28:53.499822] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:31.146 [2024-07-25 01:28:53.499830] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:31.146 [2024-07-25 01:28:53.499835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:31.146 [2024-07-25 01:28:53.502518] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:31.146 [2024-07-25 01:28:53.511455] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:31.146 [2024-07-25 01:28:53.512082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.146 [2024-07-25 01:28:53.512126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:31.146 [2024-07-25 01:28:53.512148] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:31.146 [2024-07-25 01:28:53.512610] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:31.146 [2024-07-25 01:28:53.512773] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:31.146 [2024-07-25 01:28:53.512780] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:31.146 [2024-07-25 01:28:53.512786] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:31.146 [2024-07-25 01:28:53.515530] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:31.146 [2024-07-25 01:28:53.524417] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:31.146 [2024-07-25 01:28:53.525287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.146 [2024-07-25 01:28:53.525331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:31.146 [2024-07-25 01:28:53.525355] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:31.146 [2024-07-25 01:28:53.525606] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:31.146 [2024-07-25 01:28:53.525771] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:31.146 [2024-07-25 01:28:53.525779] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:31.146 [2024-07-25 01:28:53.525784] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:31.146 [2024-07-25 01:28:53.528485] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:31.146 [2024-07-25 01:28:53.537433] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:31.146 [2024-07-25 01:28:53.538079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.146 [2024-07-25 01:28:53.538123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:31.146 [2024-07-25 01:28:53.538145] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:31.146 [2024-07-25 01:28:53.538543] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:31.146 [2024-07-25 01:28:53.538706] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:31.146 [2024-07-25 01:28:53.538714] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:31.146 [2024-07-25 01:28:53.538719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:31.147 [2024-07-25 01:28:53.541419] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:31.147 [2024-07-25 01:28:53.550318] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:31.147 [2024-07-25 01:28:53.550968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.147 [2024-07-25 01:28:53.551011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:31.147 [2024-07-25 01:28:53.551032] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:31.147 [2024-07-25 01:28:53.551468] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:31.147 [2024-07-25 01:28:53.551641] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:31.147 [2024-07-25 01:28:53.551649] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:31.147 [2024-07-25 01:28:53.551654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:31.147 [2024-07-25 01:28:53.554342] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:31.147 [2024-07-25 01:28:53.563221] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:31.147 [2024-07-25 01:28:53.563786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.147 [2024-07-25 01:28:53.563827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:31.147 [2024-07-25 01:28:53.563849] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:31.147 [2024-07-25 01:28:53.564440] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:31.147 [2024-07-25 01:28:53.564915] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:31.147 [2024-07-25 01:28:53.564923] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:31.147 [2024-07-25 01:28:53.564929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:31.147 [2024-07-25 01:28:53.567693] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:31.147 [2024-07-25 01:28:53.576162] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:31.147 [2024-07-25 01:28:53.576808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.147 [2024-07-25 01:28:53.576851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:31.147 [2024-07-25 01:28:53.576872] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:31.147 [2024-07-25 01:28:53.577463] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:31.147 [2024-07-25 01:28:53.577898] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:31.147 [2024-07-25 01:28:53.577906] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:31.147 [2024-07-25 01:28:53.577911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:31.147 [2024-07-25 01:28:53.580655] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:31.147 [2024-07-25 01:28:53.589111] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:31.147 [2024-07-25 01:28:53.589720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.147 [2024-07-25 01:28:53.589763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:31.147 [2024-07-25 01:28:53.589785] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:31.147 [2024-07-25 01:28:53.590122] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:31.147 [2024-07-25 01:28:53.590296] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:31.147 [2024-07-25 01:28:53.590304] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:31.147 [2024-07-25 01:28:53.590310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:31.147 [2024-07-25 01:28:53.593064] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:31.147 [2024-07-25 01:28:53.602062] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:31.147 [2024-07-25 01:28:53.602685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.147 [2024-07-25 01:28:53.602728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:31.147 [2024-07-25 01:28:53.602749] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:31.147 [2024-07-25 01:28:53.603341] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:31.147 [2024-07-25 01:28:53.603788] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:31.147 [2024-07-25 01:28:53.603796] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:31.147 [2024-07-25 01:28:53.603802] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:31.147 [2024-07-25 01:28:53.606483] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:31.147 [2024-07-25 01:28:53.615108] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:31.147 [2024-07-25 01:28:53.615766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.147 [2024-07-25 01:28:53.615808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:31.147 [2024-07-25 01:28:53.615837] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:31.147 [2024-07-25 01:28:53.616429] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:31.147 [2024-07-25 01:28:53.616894] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:31.147 [2024-07-25 01:28:53.616902] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:31.147 [2024-07-25 01:28:53.616910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:31.147 [2024-07-25 01:28:53.619652] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:31.147 [2024-07-25 01:28:53.628252] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.147 [2024-07-25 01:28:53.628939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.147 [2024-07-25 01:28:53.628980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:31.147 [2024-07-25 01:28:53.629002] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:31.147 [2024-07-25 01:28:53.629587] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:31.147 [2024-07-25 01:28:53.629765] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.147 [2024-07-25 01:28:53.629773] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.147 [2024-07-25 01:28:53.629780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.407 [2024-07-25 01:28:53.632614] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:31.407 [2024-07-25 01:28:53.641358] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.407 [2024-07-25 01:28:53.642019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.407 [2024-07-25 01:28:53.642076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:31.407 [2024-07-25 01:28:53.642100] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:31.407 [2024-07-25 01:28:53.642621] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:31.407 [2024-07-25 01:28:53.642874] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.407 [2024-07-25 01:28:53.642885] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.407 [2024-07-25 01:28:53.642894] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.407 [2024-07-25 01:28:53.646960] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:31.407 [2024-07-25 01:28:53.654839] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.407 [2024-07-25 01:28:53.655477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.407 [2024-07-25 01:28:53.655520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:31.407 [2024-07-25 01:28:53.655541] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:31.407 [2024-07-25 01:28:53.655935] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:31.407 [2024-07-25 01:28:53.656112] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.407 [2024-07-25 01:28:53.656124] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.407 [2024-07-25 01:28:53.656130] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.407 [2024-07-25 01:28:53.658876] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:31.407 [2024-07-25 01:28:53.667894] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.407 [2024-07-25 01:28:53.668573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.407 [2024-07-25 01:28:53.668615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:31.407 [2024-07-25 01:28:53.668636] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:31.407 [2024-07-25 01:28:53.669174] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:31.407 [2024-07-25 01:28:53.669347] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.407 [2024-07-25 01:28:53.669355] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.407 [2024-07-25 01:28:53.669361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.407 [2024-07-25 01:28:53.672041] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:31.407 [2024-07-25 01:28:53.680786] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.407 [2024-07-25 01:28:53.681411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.407 [2024-07-25 01:28:53.681428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:31.407 [2024-07-25 01:28:53.681435] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:31.407 [2024-07-25 01:28:53.681597] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:31.407 [2024-07-25 01:28:53.681759] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.407 [2024-07-25 01:28:53.681766] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.407 [2024-07-25 01:28:53.681771] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.407 [2024-07-25 01:28:53.684459] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:31.407 [2024-07-25 01:28:53.693669] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.407 [2024-07-25 01:28:53.694357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.407 [2024-07-25 01:28:53.694400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:31.407 [2024-07-25 01:28:53.694421] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:31.407 [2024-07-25 01:28:53.694999] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:31.407 [2024-07-25 01:28:53.695276] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.407 [2024-07-25 01:28:53.695284] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.407 [2024-07-25 01:28:53.695290] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.407 [2024-07-25 01:28:53.697972] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:31.407 [2024-07-25 01:28:53.706570] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.407 [2024-07-25 01:28:53.707280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.407 [2024-07-25 01:28:53.707324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:31.407 [2024-07-25 01:28:53.707345] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:31.407 [2024-07-25 01:28:53.707924] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:31.407 [2024-07-25 01:28:53.708250] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.408 [2024-07-25 01:28:53.708258] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.408 [2024-07-25 01:28:53.708265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.408 [2024-07-25 01:28:53.711022] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:31.408 [2024-07-25 01:28:53.719451] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.408 [2024-07-25 01:28:53.720174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.408 [2024-07-25 01:28:53.720217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:31.408 [2024-07-25 01:28:53.720239] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:31.408 [2024-07-25 01:28:53.720619] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:31.408 [2024-07-25 01:28:53.720792] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.408 [2024-07-25 01:28:53.720800] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.408 [2024-07-25 01:28:53.720806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.408 [2024-07-25 01:28:53.723491] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:31.408 [2024-07-25 01:28:53.732297] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.408 [2024-07-25 01:28:53.732928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.408 [2024-07-25 01:28:53.732970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:31.408 [2024-07-25 01:28:53.732992] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:31.408 [2024-07-25 01:28:53.733369] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:31.408 [2024-07-25 01:28:53.733542] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.408 [2024-07-25 01:28:53.733549] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.408 [2024-07-25 01:28:53.733556] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.408 [2024-07-25 01:28:53.736239] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:31.408 [2024-07-25 01:28:53.745112] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.408 [2024-07-25 01:28:53.745808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.408 [2024-07-25 01:28:53.745851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:31.408 [2024-07-25 01:28:53.745872] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:31.408 [2024-07-25 01:28:53.746240] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:31.408 [2024-07-25 01:28:53.746414] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.408 [2024-07-25 01:28:53.746421] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.408 [2024-07-25 01:28:53.746427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.408 [2024-07-25 01:28:53.749162] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:31.408 [2024-07-25 01:28:53.757946] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.408 [2024-07-25 01:28:53.758651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.408 [2024-07-25 01:28:53.758668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:31.408 [2024-07-25 01:28:53.758675] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:31.408 [2024-07-25 01:28:53.758846] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:31.408 [2024-07-25 01:28:53.759018] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.408 [2024-07-25 01:28:53.759025] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.408 [2024-07-25 01:28:53.759031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.408 [2024-07-25 01:28:53.761767] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:31.408 [2024-07-25 01:28:53.770777] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.408 [2024-07-25 01:28:53.771466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.408 [2024-07-25 01:28:53.771509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:31.408 [2024-07-25 01:28:53.771530] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:31.408 [2024-07-25 01:28:53.772124] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:31.408 [2024-07-25 01:28:53.772432] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.408 [2024-07-25 01:28:53.772441] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.408 [2024-07-25 01:28:53.772446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.408 [2024-07-25 01:28:53.775130] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:31.408 [2024-07-25 01:28:53.783641] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.408 [2024-07-25 01:28:53.784328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.408 [2024-07-25 01:28:53.784371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:31.408 [2024-07-25 01:28:53.784392] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:31.408 [2024-07-25 01:28:53.784971] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:31.408 [2024-07-25 01:28:53.785390] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.408 [2024-07-25 01:28:53.785398] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.408 [2024-07-25 01:28:53.785408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.408 [2024-07-25 01:28:53.788095] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:31.408 [2024-07-25 01:28:53.796527] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.408 [2024-07-25 01:28:53.797135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.408 [2024-07-25 01:28:53.797178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:31.408 [2024-07-25 01:28:53.797200] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:31.408 [2024-07-25 01:28:53.797778] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:31.408 [2024-07-25 01:28:53.798379] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.408 [2024-07-25 01:28:53.798388] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.408 [2024-07-25 01:28:53.798394] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.408 [2024-07-25 01:28:53.801076] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:31.408 [2024-07-25 01:28:53.809341] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.408 [2024-07-25 01:28:53.810059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.408 [2024-07-25 01:28:53.810102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:31.408 [2024-07-25 01:28:53.810123] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:31.408 [2024-07-25 01:28:53.810465] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:31.408 [2024-07-25 01:28:53.810628] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.408 [2024-07-25 01:28:53.810636] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.408 [2024-07-25 01:28:53.810641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.408 [2024-07-25 01:28:53.813322] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:31.408 [2024-07-25 01:28:53.822208] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.408 [2024-07-25 01:28:53.822907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.408 [2024-07-25 01:28:53.822948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:31.408 [2024-07-25 01:28:53.822969] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:31.408 [2024-07-25 01:28:53.823504] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:31.408 [2024-07-25 01:28:53.823676] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.408 [2024-07-25 01:28:53.823684] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.408 [2024-07-25 01:28:53.823691] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.408 [2024-07-25 01:28:53.826493] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:31.408 [2024-07-25 01:28:53.835002] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.408 [2024-07-25 01:28:53.835706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.408 [2024-07-25 01:28:53.835748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:31.408 [2024-07-25 01:28:53.835769] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:31.408 [2024-07-25 01:28:53.836220] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:31.408 [2024-07-25 01:28:53.836392] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.409 [2024-07-25 01:28:53.836400] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.409 [2024-07-25 01:28:53.836407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.409 [2024-07-25 01:28:53.839066] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:31.409 [2024-07-25 01:28:53.847856] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.409 [2024-07-25 01:28:53.848550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.409 [2024-07-25 01:28:53.848584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:31.409 [2024-07-25 01:28:53.848607] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:31.409 [2024-07-25 01:28:53.849179] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:31.409 [2024-07-25 01:28:53.849352] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.409 [2024-07-25 01:28:53.849360] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.409 [2024-07-25 01:28:53.849365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.409 [2024-07-25 01:28:53.852051] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:31.409 [2024-07-25 01:28:53.860705] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.409 [2024-07-25 01:28:53.861425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.409 [2024-07-25 01:28:53.861470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:31.409 [2024-07-25 01:28:53.861491] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:31.409 [2024-07-25 01:28:53.862083] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:31.409 [2024-07-25 01:28:53.862552] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.409 [2024-07-25 01:28:53.862561] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.409 [2024-07-25 01:28:53.862568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.409 [2024-07-25 01:28:53.865275] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:31.409 [2024-07-25 01:28:53.873838] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.409 [2024-07-25 01:28:53.874445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.409 [2024-07-25 01:28:53.874461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:31.409 [2024-07-25 01:28:53.874468] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:31.409 [2024-07-25 01:28:53.874648] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:31.409 [2024-07-25 01:28:53.874828] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.409 [2024-07-25 01:28:53.874836] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.409 [2024-07-25 01:28:53.874842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.409 [2024-07-25 01:28:53.877557] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:31.409 [2024-07-25 01:28:53.886752] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.409 [2024-07-25 01:28:53.887365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.409 [2024-07-25 01:28:53.887381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:31.409 [2024-07-25 01:28:53.887388] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:31.409 [2024-07-25 01:28:53.887558] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:31.409 [2024-07-25 01:28:53.887730] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.409 [2024-07-25 01:28:53.887737] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.409 [2024-07-25 01:28:53.887743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.409 [2024-07-25 01:28:53.890589] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:31.670 [2024-07-25 01:28:53.899779] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.670 [2024-07-25 01:28:53.900498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.670 [2024-07-25 01:28:53.900541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:31.670 [2024-07-25 01:28:53.900563] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:31.670 [2024-07-25 01:28:53.901155] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:31.670 [2024-07-25 01:28:53.901737] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.670 [2024-07-25 01:28:53.901761] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.670 [2024-07-25 01:28:53.901781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.670 [2024-07-25 01:28:53.904716] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:31.670 [2024-07-25 01:28:53.912827] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:31.670 [2024-07-25 01:28:53.913546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.670 [2024-07-25 01:28:53.913562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:31.670 [2024-07-25 01:28:53.913569] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:31.670 [2024-07-25 01:28:53.913740] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:31.670 [2024-07-25 01:28:53.913910] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:31.670 [2024-07-25 01:28:53.913918] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:31.670 [2024-07-25 01:28:53.913931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:31.670 [2024-07-25 01:28:53.916717] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:31.670 [2024-07-25 01:28:53.925685] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:31.670 [2024-07-25 01:28:53.926376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.670 [2024-07-25 01:28:53.926420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:31.670 [2024-07-25 01:28:53.926442] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:31.670 [2024-07-25 01:28:53.926869] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:31.670 [2024-07-25 01:28:53.927130] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:31.670 [2024-07-25 01:28:53.927141] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:31.670 [2024-07-25 01:28:53.927150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:31.670 [2024-07-25 01:28:53.931214] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:31.670 [2024-07-25 01:28:53.939119] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:31.670 [2024-07-25 01:28:53.939820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.670 [2024-07-25 01:28:53.939875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:31.670 [2024-07-25 01:28:53.939897] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:31.670 [2024-07-25 01:28:53.940432] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:31.670 [2024-07-25 01:28:53.940604] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:31.670 [2024-07-25 01:28:53.940612] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:31.670 [2024-07-25 01:28:53.940618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:31.670 [2024-07-25 01:28:53.943369] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:31.670 [2024-07-25 01:28:53.952012] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:31.670 [2024-07-25 01:28:53.952699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.670 [2024-07-25 01:28:53.952741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:31.670 [2024-07-25 01:28:53.952762] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:31.670 [2024-07-25 01:28:53.953405] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:31.670 [2024-07-25 01:28:53.953861] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:31.670 [2024-07-25 01:28:53.953869] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:31.670 [2024-07-25 01:28:53.953875] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:31.670 [2024-07-25 01:28:53.956561] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:31.670 [2024-07-25 01:28:53.964947] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:31.670 [2024-07-25 01:28:53.965374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.670 [2024-07-25 01:28:53.965394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:31.670 [2024-07-25 01:28:53.965401] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:31.670 [2024-07-25 01:28:53.965572] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:31.670 [2024-07-25 01:28:53.965743] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:31.670 [2024-07-25 01:28:53.965751] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:31.670 [2024-07-25 01:28:53.965757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:31.670 [2024-07-25 01:28:53.968513] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:31.670 [2024-07-25 01:28:53.977760] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:31.670 [2024-07-25 01:28:53.978428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.670 [2024-07-25 01:28:53.978470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:31.670 [2024-07-25 01:28:53.978492] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:31.670 [2024-07-25 01:28:53.978847] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:31.670 [2024-07-25 01:28:53.979010] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:31.670 [2024-07-25 01:28:53.979018] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:31.670 [2024-07-25 01:28:53.979023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:31.670 [2024-07-25 01:28:53.981724] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:31.670 [2024-07-25 01:28:53.990624] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:31.670 [2024-07-25 01:28:53.991283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.670 [2024-07-25 01:28:53.991326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:31.670 [2024-07-25 01:28:53.991348] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:31.670 [2024-07-25 01:28:53.991839] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:31.670 [2024-07-25 01:28:53.992001] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:31.670 [2024-07-25 01:28:53.992008] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:31.670 [2024-07-25 01:28:53.992014] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:31.670 [2024-07-25 01:28:53.994718] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:31.670 [2024-07-25 01:28:54.003466] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:31.671 [2024-07-25 01:28:54.004116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.671 [2024-07-25 01:28:54.004157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:31.671 [2024-07-25 01:28:54.004179] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:31.671 [2024-07-25 01:28:54.004755] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:31.671 [2024-07-25 01:28:54.005048] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:31.671 [2024-07-25 01:28:54.005057] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:31.671 [2024-07-25 01:28:54.005062] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:31.671 [2024-07-25 01:28:54.007761] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:31.671 [2024-07-25 01:28:54.016275] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:31.671 [2024-07-25 01:28:54.016957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.671 [2024-07-25 01:28:54.017000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:31.671 [2024-07-25 01:28:54.017022] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:31.671 [2024-07-25 01:28:54.017614] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:31.671 [2024-07-25 01:28:54.018204] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:31.671 [2024-07-25 01:28:54.018230] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:31.671 [2024-07-25 01:28:54.018239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:31.671 [2024-07-25 01:28:54.022300] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:31.671 [2024-07-25 01:28:54.029953] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:31.671 [2024-07-25 01:28:54.030651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.671 [2024-07-25 01:28:54.030694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:31.671 [2024-07-25 01:28:54.030715] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:31.671 [2024-07-25 01:28:54.031306] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:31.671 [2024-07-25 01:28:54.031829] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:31.671 [2024-07-25 01:28:54.031837] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:31.671 [2024-07-25 01:28:54.031843] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:31.671 [2024-07-25 01:28:54.034567] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:31.671 [2024-07-25 01:28:54.042883] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:31.671 [2024-07-25 01:28:54.043596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.671 [2024-07-25 01:28:54.043639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:31.671 [2024-07-25 01:28:54.043660] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:31.671 [2024-07-25 01:28:54.044251] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:31.671 [2024-07-25 01:28:54.044845] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:31.671 [2024-07-25 01:28:54.044853] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:31.671 [2024-07-25 01:28:54.044859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:31.671 [2024-07-25 01:28:54.047540] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:31.671 [2024-07-25 01:28:54.055730] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:31.671 [2024-07-25 01:28:54.056436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.671 [2024-07-25 01:28:54.056479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:31.671 [2024-07-25 01:28:54.056501] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:31.671 [2024-07-25 01:28:54.057095] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:31.671 [2024-07-25 01:28:54.057407] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:31.671 [2024-07-25 01:28:54.057415] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:31.671 [2024-07-25 01:28:54.057421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:31.671 [2024-07-25 01:28:54.060163] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:31.671 [2024-07-25 01:28:54.068676] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:31.671 [2024-07-25 01:28:54.069367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.671 [2024-07-25 01:28:54.069411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:31.671 [2024-07-25 01:28:54.069433] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:31.671 [2024-07-25 01:28:54.069772] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:31.671 [2024-07-25 01:28:54.069944] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:31.671 [2024-07-25 01:28:54.069952] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:31.671 [2024-07-25 01:28:54.069958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:31.671 [2024-07-25 01:28:54.072649] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:31.671 [2024-07-25 01:28:54.081592] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:31.671 [2024-07-25 01:28:54.082288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.671 [2024-07-25 01:28:54.082331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:31.671 [2024-07-25 01:28:54.082353] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:31.671 [2024-07-25 01:28:54.082929] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:31.671 [2024-07-25 01:28:54.083217] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:31.671 [2024-07-25 01:28:54.083225] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:31.671 [2024-07-25 01:28:54.083231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:31.671 [2024-07-25 01:28:54.085913] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:31.671 [2024-07-25 01:28:54.094503] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:31.671 [2024-07-25 01:28:54.095209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.671 [2024-07-25 01:28:54.095252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:31.671 [2024-07-25 01:28:54.095281] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:31.671 [2024-07-25 01:28:54.095761] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:31.671 [2024-07-25 01:28:54.095933] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:31.671 [2024-07-25 01:28:54.095940] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:31.671 [2024-07-25 01:28:54.095946] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:31.671 [2024-07-25 01:28:54.098663] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:31.671 [2024-07-25 01:28:54.107311] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:31.671 [2024-07-25 01:28:54.107992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.671 [2024-07-25 01:28:54.108008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:31.671 [2024-07-25 01:28:54.108014] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:31.671 [2024-07-25 01:28:54.108204] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:31.671 [2024-07-25 01:28:54.108377] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:31.671 [2024-07-25 01:28:54.108384] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:31.671 [2024-07-25 01:28:54.108390] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:31.671 [2024-07-25 01:28:54.111081] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:31.671 [2024-07-25 01:28:54.120125] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:31.671 [2024-07-25 01:28:54.120749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.671 [2024-07-25 01:28:54.120764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:31.671 [2024-07-25 01:28:54.120771] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:31.671 [2024-07-25 01:28:54.120942] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:31.671 [2024-07-25 01:28:54.121119] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:31.672 [2024-07-25 01:28:54.121128] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:31.672 [2024-07-25 01:28:54.121134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:31.672 [2024-07-25 01:28:54.123814] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:31.672 [2024-07-25 01:28:54.133023] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:31.672 [2024-07-25 01:28:54.133758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.672 [2024-07-25 01:28:54.133801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:31.672 [2024-07-25 01:28:54.133823] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:31.672 [2024-07-25 01:28:54.134339] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:31.672 [2024-07-25 01:28:54.134512] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:31.672 [2024-07-25 01:28:54.134523] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:31.672 [2024-07-25 01:28:54.134529] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:31.672 [2024-07-25 01:28:54.137215] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:31.672 [2024-07-25 01:28:54.146129] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:31.672 [2024-07-25 01:28:54.146846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.672 [2024-07-25 01:28:54.146862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:31.672 [2024-07-25 01:28:54.146869] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:31.672 [2024-07-25 01:28:54.147059] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:31.672 [2024-07-25 01:28:54.147251] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:31.672 [2024-07-25 01:28:54.147259] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:31.672 [2024-07-25 01:28:54.147265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:31.672 [2024-07-25 01:28:54.149958] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:31.672 [2024-07-25 01:28:54.159140] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:31.672 [2024-07-25 01:28:54.159836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.672 [2024-07-25 01:28:54.159877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:31.672 [2024-07-25 01:28:54.159899] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:31.672 [2024-07-25 01:28:54.160265] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:31.672 [2024-07-25 01:28:54.160443] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:31.672 [2024-07-25 01:28:54.160451] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:31.672 [2024-07-25 01:28:54.160457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:31.932 [2024-07-25 01:28:54.163250] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:31.932 [2024-07-25 01:28:54.171997] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:31.932 [2024-07-25 01:28:54.172464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.932 [2024-07-25 01:28:54.172508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:31.932 [2024-07-25 01:28:54.172530] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:31.932 [2024-07-25 01:28:54.173070] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:31.932 [2024-07-25 01:28:54.173243] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:31.932 [2024-07-25 01:28:54.173251] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:31.932 [2024-07-25 01:28:54.173258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:31.932 [2024-07-25 01:28:54.175940] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:31.932 [2024-07-25 01:28:54.184838] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:31.932 [2024-07-25 01:28:54.185535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.932 [2024-07-25 01:28:54.185577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:31.932 [2024-07-25 01:28:54.185598] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:31.932 [2024-07-25 01:28:54.186190] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:31.932 [2024-07-25 01:28:54.186445] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:31.932 [2024-07-25 01:28:54.186453] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:31.932 [2024-07-25 01:28:54.186459] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:31.932 [2024-07-25 01:28:54.189145] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:31.932 [2024-07-25 01:28:54.197665] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:31.932 [2024-07-25 01:28:54.198371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.932 [2024-07-25 01:28:54.198413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:31.932 [2024-07-25 01:28:54.198435] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:31.932 [2024-07-25 01:28:54.199013] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:31.932 [2024-07-25 01:28:54.199471] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:31.932 [2024-07-25 01:28:54.199480] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:31.932 [2024-07-25 01:28:54.199485] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:31.932 [2024-07-25 01:28:54.202178] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:31.932 [2024-07-25 01:28:54.210646] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:31.932 [2024-07-25 01:28:54.211325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.932 [2024-07-25 01:28:54.211368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:31.932 [2024-07-25 01:28:54.211390] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:31.932 [2024-07-25 01:28:54.211968] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:31.932 [2024-07-25 01:28:54.212368] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:31.932 [2024-07-25 01:28:54.212376] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:31.932 [2024-07-25 01:28:54.212382] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:31.932 [2024-07-25 01:28:54.215135] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:31.932 [2024-07-25 01:28:54.223628] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.932 [2024-07-25 01:28:54.224317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.932 [2024-07-25 01:28:54.224360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:31.932 [2024-07-25 01:28:54.224382] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:31.932 [2024-07-25 01:28:54.224974] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:31.932 [2024-07-25 01:28:54.225196] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.932 [2024-07-25 01:28:54.225205] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.932 [2024-07-25 01:28:54.225210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.932 [2024-07-25 01:28:54.227955] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:31.932 [2024-07-25 01:28:54.236568] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.932 [2024-07-25 01:28:54.237245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.932 [2024-07-25 01:28:54.237261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:31.932 [2024-07-25 01:28:54.237268] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:31.932 [2024-07-25 01:28:54.237439] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:31.932 [2024-07-25 01:28:54.237610] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.932 [2024-07-25 01:28:54.237617] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.932 [2024-07-25 01:28:54.237623] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.932 [2024-07-25 01:28:54.240327] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:31.932 [2024-07-25 01:28:54.249422] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.932 [2024-07-25 01:28:54.250094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.932 [2024-07-25 01:28:54.250137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:31.932 [2024-07-25 01:28:54.250159] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:31.932 [2024-07-25 01:28:54.250598] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:31.932 [2024-07-25 01:28:54.250852] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.932 [2024-07-25 01:28:54.250863] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.932 [2024-07-25 01:28:54.250871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.932 [2024-07-25 01:28:54.254941] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:31.932 [2024-07-25 01:28:54.262582] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.932 [2024-07-25 01:28:54.263202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.932 [2024-07-25 01:28:54.263244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:31.932 [2024-07-25 01:28:54.263265] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:31.932 [2024-07-25 01:28:54.263661] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:31.932 [2024-07-25 01:28:54.263833] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.932 [2024-07-25 01:28:54.263841] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.932 [2024-07-25 01:28:54.263850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.932 [2024-07-25 01:28:54.266570] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:31.932 [2024-07-25 01:28:54.275499] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.932 [2024-07-25 01:28:54.276158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.932 [2024-07-25 01:28:54.276174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:31.932 [2024-07-25 01:28:54.276180] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:31.932 [2024-07-25 01:28:54.276342] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:31.932 [2024-07-25 01:28:54.276504] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.932 [2024-07-25 01:28:54.276511] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.932 [2024-07-25 01:28:54.276517] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.932 [2024-07-25 01:28:54.279210] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:31.932 [2024-07-25 01:28:54.288291] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.932 [2024-07-25 01:28:54.288991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.932 [2024-07-25 01:28:54.289032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:31.932 [2024-07-25 01:28:54.289068] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:31.932 [2024-07-25 01:28:54.289582] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:31.932 [2024-07-25 01:28:54.289754] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.932 [2024-07-25 01:28:54.289762] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.932 [2024-07-25 01:28:54.289768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.932 [2024-07-25 01:28:54.292453] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:31.932 [2024-07-25 01:28:54.301177] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.932 [2024-07-25 01:28:54.301863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.932 [2024-07-25 01:28:54.301905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:31.932 [2024-07-25 01:28:54.301926] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:31.932 [2024-07-25 01:28:54.302520] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:31.932 [2024-07-25 01:28:54.302979] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.932 [2024-07-25 01:28:54.302987] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.932 [2024-07-25 01:28:54.302992] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.933 [2024-07-25 01:28:54.305695] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:31.933 [2024-07-25 01:28:54.314089] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.933 [2024-07-25 01:28:54.314769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.933 [2024-07-25 01:28:54.314784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:31.933 [2024-07-25 01:28:54.314790] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:31.933 [2024-07-25 01:28:54.314953] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:31.933 [2024-07-25 01:28:54.315140] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.933 [2024-07-25 01:28:54.315148] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.933 [2024-07-25 01:28:54.315154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.933 [2024-07-25 01:28:54.317838] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:31.933 [2024-07-25 01:28:54.326891] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.933 [2024-07-25 01:28:54.327502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.933 [2024-07-25 01:28:54.327518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:31.933 [2024-07-25 01:28:54.327525] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:31.933 [2024-07-25 01:28:54.327695] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:31.933 [2024-07-25 01:28:54.327867] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.933 [2024-07-25 01:28:54.327874] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.933 [2024-07-25 01:28:54.327880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.933 [2024-07-25 01:28:54.330573] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:31.933 [2024-07-25 01:28:54.339876] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.933 [2024-07-25 01:28:54.340566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.933 [2024-07-25 01:28:54.340608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:31.933 [2024-07-25 01:28:54.340629] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:31.933 [2024-07-25 01:28:54.341223] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:31.933 [2024-07-25 01:28:54.341720] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.933 [2024-07-25 01:28:54.341731] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.933 [2024-07-25 01:28:54.341740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.933 [2024-07-25 01:28:54.345804] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:31.933 [2024-07-25 01:28:54.353580] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.933 [2024-07-25 01:28:54.354220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.933 [2024-07-25 01:28:54.354264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:31.933 [2024-07-25 01:28:54.354285] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:31.933 [2024-07-25 01:28:54.354865] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:31.933 [2024-07-25 01:28:54.355038] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.933 [2024-07-25 01:28:54.355051] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.933 [2024-07-25 01:28:54.355057] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.933 [2024-07-25 01:28:54.357802] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:31.933 [2024-07-25 01:28:54.366520] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.933 [2024-07-25 01:28:54.367239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.933 [2024-07-25 01:28:54.367282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:31.933 [2024-07-25 01:28:54.367304] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:31.933 [2024-07-25 01:28:54.367881] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:31.933 [2024-07-25 01:28:54.368213] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.933 [2024-07-25 01:28:54.368222] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.933 [2024-07-25 01:28:54.368228] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.933 [2024-07-25 01:28:54.370921] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:31.933 [2024-07-25 01:28:54.379454] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.933 [2024-07-25 01:28:54.380151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.933 [2024-07-25 01:28:54.380193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:31.933 [2024-07-25 01:28:54.380215] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:31.933 [2024-07-25 01:28:54.380605] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:31.933 [2024-07-25 01:28:54.380780] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.933 [2024-07-25 01:28:54.380789] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.933 [2024-07-25 01:28:54.380795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.933 [2024-07-25 01:28:54.383455] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:31.933 [2024-07-25 01:28:54.392450] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.933 [2024-07-25 01:28:54.393085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.933 [2024-07-25 01:28:54.393128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:31.933 [2024-07-25 01:28:54.393150] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:31.933 [2024-07-25 01:28:54.393728] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:31.933 [2024-07-25 01:28:54.394038] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.933 [2024-07-25 01:28:54.394055] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.933 [2024-07-25 01:28:54.394061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.933 [2024-07-25 01:28:54.396894] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:31.933 [2024-07-25 01:28:54.405437] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.933 [2024-07-25 01:28:54.406133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.933 [2024-07-25 01:28:54.406177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:31.933 [2024-07-25 01:28:54.406199] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:31.933 [2024-07-25 01:28:54.406529] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:31.933 [2024-07-25 01:28:54.406694] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.933 [2024-07-25 01:28:54.406703] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.933 [2024-07-25 01:28:54.406709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.933 [2024-07-25 01:28:54.409456] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:31.933 [2024-07-25 01:28:54.418317] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.933 [2024-07-25 01:28:54.419034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.933 [2024-07-25 01:28:54.419088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:31.933 [2024-07-25 01:28:54.419109] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:31.933 [2024-07-25 01:28:54.419680] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:31.933 [2024-07-25 01:28:54.419855] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.933 [2024-07-25 01:28:54.419864] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.933 [2024-07-25 01:28:54.419870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.933 [2024-07-25 01:28:54.422798] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:32.193 [2024-07-25 01:28:54.431332] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.193 [2024-07-25 01:28:54.432020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.193 [2024-07-25 01:28:54.432076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:32.193 [2024-07-25 01:28:54.432098] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:32.193 [2024-07-25 01:28:54.432662] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:32.193 [2024-07-25 01:28:54.432887] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.193 [2024-07-25 01:28:54.432899] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.193 [2024-07-25 01:28:54.432909] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.193 [2024-07-25 01:28:54.436976] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:32.193 [2024-07-25 01:28:54.444897] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.193 [2024-07-25 01:28:54.445594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.194 [2024-07-25 01:28:54.445645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:32.194 [2024-07-25 01:28:54.445668] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:32.194 [2024-07-25 01:28:54.446029] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:32.194 [2024-07-25 01:28:54.446222] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.194 [2024-07-25 01:28:54.446232] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.194 [2024-07-25 01:28:54.446239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.194 [2024-07-25 01:28:54.448947] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:32.194 [2024-07-25 01:28:54.457715] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.194 [2024-07-25 01:28:54.458376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.194 [2024-07-25 01:28:54.458393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:32.194 [2024-07-25 01:28:54.458399] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:32.194 [2024-07-25 01:28:54.458561] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:32.194 [2024-07-25 01:28:54.458724] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.194 [2024-07-25 01:28:54.458732] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.194 [2024-07-25 01:28:54.458739] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.194 [2024-07-25 01:28:54.461430] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:32.194 [2024-07-25 01:28:54.470757] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.194 [2024-07-25 01:28:54.471449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.194 [2024-07-25 01:28:54.471494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:32.194 [2024-07-25 01:28:54.471516] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:32.194 [2024-07-25 01:28:54.471862] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:32.194 [2024-07-25 01:28:54.472027] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.194 [2024-07-25 01:28:54.472036] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.194 [2024-07-25 01:28:54.472048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.194 [2024-07-25 01:28:54.474739] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:32.194 [2024-07-25 01:28:54.483575] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.194 [2024-07-25 01:28:54.484278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.194 [2024-07-25 01:28:54.484322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:32.194 [2024-07-25 01:28:54.484344] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:32.194 [2024-07-25 01:28:54.484760] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:32.194 [2024-07-25 01:28:54.484928] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.194 [2024-07-25 01:28:54.484937] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.194 [2024-07-25 01:28:54.484943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.194 [2024-07-25 01:28:54.487637] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:32.194 [2024-07-25 01:28:54.496402] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.194 [2024-07-25 01:28:54.497072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.194 [2024-07-25 01:28:54.497116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:32.194 [2024-07-25 01:28:54.497138] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:32.194 [2024-07-25 01:28:54.497718] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:32.194 [2024-07-25 01:28:54.497948] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.194 [2024-07-25 01:28:54.497957] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.194 [2024-07-25 01:28:54.497963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.194 [2024-07-25 01:28:54.500659] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:32.194 [2024-07-25 01:28:54.509195] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.194 [2024-07-25 01:28:54.509880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.194 [2024-07-25 01:28:54.509923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:32.194 [2024-07-25 01:28:54.509944] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:32.194 [2024-07-25 01:28:54.510237] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:32.194 [2024-07-25 01:28:54.510402] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.194 [2024-07-25 01:28:54.510411] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.194 [2024-07-25 01:28:54.510417] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.194 [2024-07-25 01:28:54.513078] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:32.194 [2024-07-25 01:28:54.522071] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.194 [2024-07-25 01:28:54.522745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.194 [2024-07-25 01:28:54.522761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:32.194 [2024-07-25 01:28:54.522768] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:32.194 [2024-07-25 01:28:54.522930] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:32.194 [2024-07-25 01:28:54.523118] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.194 [2024-07-25 01:28:54.523127] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.194 [2024-07-25 01:28:54.523134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.194 [2024-07-25 01:28:54.525806] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:32.194 [2024-07-25 01:28:54.534955] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:32.194 [2024-07-25 01:28:54.535649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.194 [2024-07-25 01:28:54.535692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:32.194 [2024-07-25 01:28:54.535714] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:32.194 [2024-07-25 01:28:54.536151] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:32.194 [2024-07-25 01:28:54.536327] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:32.194 [2024-07-25 01:28:54.536336] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:32.194 [2024-07-25 01:28:54.536342] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:32.194 [2024-07-25 01:28:54.539001] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:32.194 [2024-07-25 01:28:54.547836] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:32.194 [2024-07-25 01:28:54.548525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.194 [2024-07-25 01:28:54.548567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:32.194 [2024-07-25 01:28:54.548588] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:32.194 [2024-07-25 01:28:54.548975] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:32.194 [2024-07-25 01:28:54.549166] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:32.194 [2024-07-25 01:28:54.549176] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:32.194 [2024-07-25 01:28:54.549183] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:32.194 [2024-07-25 01:28:54.551849] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:32.194 [2024-07-25 01:28:54.560695] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:32.194 [2024-07-25 01:28:54.561322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.194 [2024-07-25 01:28:54.561365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:32.194 [2024-07-25 01:28:54.561386] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:32.194 [2024-07-25 01:28:54.561964] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:32.194 [2024-07-25 01:28:54.562154] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:32.194 [2024-07-25 01:28:54.562162] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:32.194 [2024-07-25 01:28:54.562168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:32.194 [2024-07-25 01:28:54.564837] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:32.194 [2024-07-25 01:28:54.573620] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:32.194 [2024-07-25 01:28:54.574304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.194 [2024-07-25 01:28:54.574345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:32.194 [2024-07-25 01:28:54.574379] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:32.194 [2024-07-25 01:28:54.574945] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:32.194 [2024-07-25 01:28:54.575132] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:32.194 [2024-07-25 01:28:54.575142] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:32.194 [2024-07-25 01:28:54.575149] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:32.194 [2024-07-25 01:28:54.577822] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:32.194 [2024-07-25 01:28:54.586511] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:32.194 [2024-07-25 01:28:54.587121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.194 [2024-07-25 01:28:54.587137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:32.194 [2024-07-25 01:28:54.587143] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:32.194 [2024-07-25 01:28:54.587306] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:32.194 [2024-07-25 01:28:54.587469] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:32.194 [2024-07-25 01:28:54.587477] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:32.194 [2024-07-25 01:28:54.587483] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:32.194 [2024-07-25 01:28:54.590179] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:32.194 [2024-07-25 01:28:54.599387] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:32.194 [2024-07-25 01:28:54.600077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.194 [2024-07-25 01:28:54.600121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:32.194 [2024-07-25 01:28:54.600143] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:32.194 [2024-07-25 01:28:54.600475] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:32.194 [2024-07-25 01:28:54.600640] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:32.194 [2024-07-25 01:28:54.600649] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:32.194 [2024-07-25 01:28:54.600655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:32.194 [2024-07-25 01:28:54.603252] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:32.194 [2024-07-25 01:28:54.612218] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:32.194 [2024-07-25 01:28:54.612882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.194 [2024-07-25 01:28:54.612924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:32.194 [2024-07-25 01:28:54.612945] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:32.194 [2024-07-25 01:28:54.613297] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:32.194 [2024-07-25 01:28:54.613472] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:32.194 [2024-07-25 01:28:54.613485] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:32.194 [2024-07-25 01:28:54.613492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:32.194 [2024-07-25 01:28:54.616148] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:32.194 [2024-07-25 01:28:54.625120] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:32.194 [2024-07-25 01:28:54.625722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.194 [2024-07-25 01:28:54.625764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:32.194 [2024-07-25 01:28:54.625786] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:32.194 [2024-07-25 01:28:54.626382] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:32.194 [2024-07-25 01:28:54.626768] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:32.194 [2024-07-25 01:28:54.626777] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:32.194 [2024-07-25 01:28:54.626783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:32.194 [2024-07-25 01:28:54.629419] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:32.194 [2024-07-25 01:28:54.638254] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:32.194 [2024-07-25 01:28:54.638937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.194 [2024-07-25 01:28:54.638982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:32.194 [2024-07-25 01:28:54.639004] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:32.194 [2024-07-25 01:28:54.639418] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:32.194 [2024-07-25 01:28:54.639583] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:32.194 [2024-07-25 01:28:54.639592] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:32.194 [2024-07-25 01:28:54.639598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:32.194 [2024-07-25 01:28:54.642280] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:32.194 [2024-07-25 01:28:54.651095] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:32.194 [2024-07-25 01:28:54.651732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.194 [2024-07-25 01:28:54.651749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:32.194 [2024-07-25 01:28:54.651757] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:32.194 [2024-07-25 01:28:54.651929] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:32.194 [2024-07-25 01:28:54.652105] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:32.194 [2024-07-25 01:28:54.652116] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:32.194 [2024-07-25 01:28:54.652122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:32.194 [2024-07-25 01:28:54.654955] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:32.194 [2024-07-25 01:28:54.664180] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:32.194 [2024-07-25 01:28:54.665075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.194 [2024-07-25 01:28:54.665123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:32.194 [2024-07-25 01:28:54.665144] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:32.194 [2024-07-25 01:28:54.665522] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:32.194 [2024-07-25 01:28:54.665688] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:32.194 [2024-07-25 01:28:54.665697] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:32.194 [2024-07-25 01:28:54.665703] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:32.194 [2024-07-25 01:28:54.669628] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:32.194 [2024-07-25 01:28:54.677815] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:32.194 [2024-07-25 01:28:54.678699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.194 [2024-07-25 01:28:54.678715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:32.194 [2024-07-25 01:28:54.678741] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:32.194 [2024-07-25 01:28:54.679315] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:32.194 [2024-07-25 01:28:54.679495] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:32.194 [2024-07-25 01:28:54.679505] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:32.194 [2024-07-25 01:28:54.679511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:32.194 [2024-07-25 01:28:54.682366] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:32.454 [2024-07-25 01:28:54.690923] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:32.454 [2024-07-25 01:28:54.691554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.454 [2024-07-25 01:28:54.691571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:32.454 [2024-07-25 01:28:54.691579] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:32.454 [2024-07-25 01:28:54.691742] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:32.455 [2024-07-25 01:28:54.691905] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:32.455 [2024-07-25 01:28:54.691915] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:32.455 [2024-07-25 01:28:54.691920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:32.455 [2024-07-25 01:28:54.694671] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:32.455 [2024-07-25 01:28:54.703918] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:32.455 [2024-07-25 01:28:54.704523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.455 [2024-07-25 01:28:54.704540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:32.455 [2024-07-25 01:28:54.704548] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:32.455 [2024-07-25 01:28:54.704723] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:32.455 [2024-07-25 01:28:54.704896] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:32.455 [2024-07-25 01:28:54.704905] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:32.455 [2024-07-25 01:28:54.704911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:32.455 [2024-07-25 01:28:54.707666] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:32.455 [2024-07-25 01:28:54.716935] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:32.455 [2024-07-25 01:28:54.717643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.455 [2024-07-25 01:28:54.717686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:32.455 [2024-07-25 01:28:54.717708] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:32.455 [2024-07-25 01:28:54.718299] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:32.455 [2024-07-25 01:28:54.718747] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:32.455 [2024-07-25 01:28:54.718756] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:32.455 [2024-07-25 01:28:54.718762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:32.455 [2024-07-25 01:28:54.721535] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:32.455 [2024-07-25 01:28:54.729888] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:32.455 [2024-07-25 01:28:54.730517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.455 [2024-07-25 01:28:54.730561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:32.455 [2024-07-25 01:28:54.730584] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:32.455 [2024-07-25 01:28:54.730871] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:32.455 [2024-07-25 01:28:54.731049] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:32.455 [2024-07-25 01:28:54.731059] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:32.455 [2024-07-25 01:28:54.731066] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:32.455 [2024-07-25 01:28:54.733685] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:32.455 [2024-07-25 01:28:54.742884] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:32.455 [2024-07-25 01:28:54.743495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.455 [2024-07-25 01:28:54.743551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:32.455 [2024-07-25 01:28:54.743573] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:32.455 [2024-07-25 01:28:54.744163] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:32.455 [2024-07-25 01:28:54.744439] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:32.455 [2024-07-25 01:28:54.744449] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:32.455 [2024-07-25 01:28:54.744458] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:32.455 [2024-07-25 01:28:54.747114] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:32.455 [2024-07-25 01:28:54.755789] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:32.455 [2024-07-25 01:28:54.756411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.455 [2024-07-25 01:28:54.756456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:32.455 [2024-07-25 01:28:54.756478] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:32.455 [2024-07-25 01:28:54.757066] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:32.455 [2024-07-25 01:28:54.757652] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:32.455 [2024-07-25 01:28:54.757661] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:32.455 [2024-07-25 01:28:54.757667] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:32.455 [2024-07-25 01:28:54.760317] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:32.455 [2024-07-25 01:28:54.768698] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:32.455 [2024-07-25 01:28:54.769356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.455 [2024-07-25 01:28:54.769396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:32.455 [2024-07-25 01:28:54.769419] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:32.455 [2024-07-25 01:28:54.769998] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:32.455 [2024-07-25 01:28:54.770591] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:32.455 [2024-07-25 01:28:54.770626] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:32.455 [2024-07-25 01:28:54.770633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:32.455 [2024-07-25 01:28:54.773277] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:32.455 [2024-07-25 01:28:54.781639] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:32.455 [2024-07-25 01:28:54.782291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.455 [2024-07-25 01:28:54.782334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:32.455 [2024-07-25 01:28:54.782356] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:32.455 [2024-07-25 01:28:54.782749] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:32.455 [2024-07-25 01:28:54.782913] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:32.455 [2024-07-25 01:28:54.782922] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:32.455 [2024-07-25 01:28:54.782928] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:32.455 [2024-07-25 01:28:54.785576] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:32.455 [2024-07-25 01:28:54.794555] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:32.455 [2024-07-25 01:28:54.795292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.455 [2024-07-25 01:28:54.795334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:32.455 [2024-07-25 01:28:54.795355] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:32.455 [2024-07-25 01:28:54.795672] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:32.455 [2024-07-25 01:28:54.795837] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:32.455 [2024-07-25 01:28:54.795847] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:32.455 [2024-07-25 01:28:54.795853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:32.455 [2024-07-25 01:28:54.798491] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:32.455 [2024-07-25 01:28:54.807487] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:32.455 [2024-07-25 01:28:54.808181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.455 [2024-07-25 01:28:54.808198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:32.455 [2024-07-25 01:28:54.808205] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:32.455 [2024-07-25 01:28:54.808380] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:32.455 [2024-07-25 01:28:54.808543] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:32.455 [2024-07-25 01:28:54.808553] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:32.455 [2024-07-25 01:28:54.808559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:32.455 [2024-07-25 01:28:54.811185] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:32.455 [2024-07-25 01:28:54.820511] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:32.455 [2024-07-25 01:28:54.821213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.455 [2024-07-25 01:28:54.821257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:32.455 [2024-07-25 01:28:54.821278] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:32.455 [2024-07-25 01:28:54.821596] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:32.455 [2024-07-25 01:28:54.821759] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:32.455 [2024-07-25 01:28:54.821768] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:32.455 [2024-07-25 01:28:54.821774] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:32.455 [2024-07-25 01:28:54.824411] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:32.455 [2024-07-25 01:28:54.833404] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:32.455 [2024-07-25 01:28:54.834164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.455 [2024-07-25 01:28:54.834207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:32.455 [2024-07-25 01:28:54.834228] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:32.455 [2024-07-25 01:28:54.834807] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:32.455 [2024-07-25 01:28:54.835010] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:32.455 [2024-07-25 01:28:54.835019] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:32.455 [2024-07-25 01:28:54.835025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:32.455 [2024-07-25 01:28:54.837676] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:32.455 [2024-07-25 01:28:54.846331] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:32.455 [2024-07-25 01:28:54.846945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.455 [2024-07-25 01:28:54.846989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:32.455 [2024-07-25 01:28:54.847011] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:32.455 [2024-07-25 01:28:54.847518] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:32.455 [2024-07-25 01:28:54.847773] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:32.455 [2024-07-25 01:28:54.847785] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:32.455 [2024-07-25 01:28:54.847795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:32.455 [2024-07-25 01:28:54.851862] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:32.455 [2024-07-25 01:28:54.859716] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:32.455 [2024-07-25 01:28:54.860385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.455 [2024-07-25 01:28:54.860428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:32.455 [2024-07-25 01:28:54.860450] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:32.455 [2024-07-25 01:28:54.861021] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:32.455 [2024-07-25 01:28:54.861198] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:32.455 [2024-07-25 01:28:54.861208] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:32.455 [2024-07-25 01:28:54.861214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:32.455 [2024-07-25 01:28:54.863959] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:32.455 [2024-07-25 01:28:54.872767] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:32.455 [2024-07-25 01:28:54.873432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.455 [2024-07-25 01:28:54.873474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:32.455 [2024-07-25 01:28:54.873496] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:32.455 [2024-07-25 01:28:54.873963] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:32.455 [2024-07-25 01:28:54.874134] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:32.455 [2024-07-25 01:28:54.874143] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:32.455 [2024-07-25 01:28:54.874149] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:32.455 [2024-07-25 01:28:54.876808] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:32.455 [2024-07-25 01:28:54.885642] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:32.455 [2024-07-25 01:28:54.886328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.455 [2024-07-25 01:28:54.886371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:32.455 [2024-07-25 01:28:54.886394] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:32.455 [2024-07-25 01:28:54.886974] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:32.455 [2024-07-25 01:28:54.887251] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:32.455 [2024-07-25 01:28:54.887260] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:32.455 [2024-07-25 01:28:54.887267] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:32.455 [2024-07-25 01:28:54.890015] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:32.455 [2024-07-25 01:28:54.898622] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:32.455 [2024-07-25 01:28:54.899319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.455 [2024-07-25 01:28:54.899363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:32.455 [2024-07-25 01:28:54.899385] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:32.455 [2024-07-25 01:28:54.899963] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:32.455 [2024-07-25 01:28:54.900291] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:32.455 [2024-07-25 01:28:54.900301] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:32.455 [2024-07-25 01:28:54.900307] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:32.455 [2024-07-25 01:28:54.903058] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:32.455 [2024-07-25 01:28:54.911734] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:32.455 [2024-07-25 01:28:54.912476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.455 [2024-07-25 01:28:54.912493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:32.455 [2024-07-25 01:28:54.912500] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:32.455 [2024-07-25 01:28:54.912671] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:32.455 [2024-07-25 01:28:54.912844] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:32.455 [2024-07-25 01:28:54.912852] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:32.455 [2024-07-25 01:28:54.912859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:32.455 [2024-07-25 01:28:54.915558] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:32.455 [2024-07-25 01:28:54.924715] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:32.455 [2024-07-25 01:28:54.925456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.455 [2024-07-25 01:28:54.925501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:32.455 [2024-07-25 01:28:54.925530] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:32.455 [2024-07-25 01:28:54.925909] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:32.456 [2024-07-25 01:28:54.926088] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:32.456 [2024-07-25 01:28:54.926098] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:32.456 [2024-07-25 01:28:54.926105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:32.456 [2024-07-25 01:28:54.928717] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:32.456 [2024-07-25 01:28:54.937565] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:32.456 [2024-07-25 01:28:54.938180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.456 [2024-07-25 01:28:54.938198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:32.456 [2024-07-25 01:28:54.938206] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:32.456 [2024-07-25 01:28:54.938377] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:32.456 [2024-07-25 01:28:54.938549] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:32.456 [2024-07-25 01:28:54.938559] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:32.456 [2024-07-25 01:28:54.938565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:32.456 [2024-07-25 01:28:54.941264] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:32.717 [2024-07-25 01:28:54.950615] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:32.717 [2024-07-25 01:28:54.951364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.717 [2024-07-25 01:28:54.951408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:32.717 [2024-07-25 01:28:54.951430] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:32.717 [2024-07-25 01:28:54.952008] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:32.717 [2024-07-25 01:28:54.952232] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:32.717 [2024-07-25 01:28:54.952241] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:32.717 [2024-07-25 01:28:54.952247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:32.717 [2024-07-25 01:28:54.954907] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:32.717 [2024-07-25 01:28:54.963651] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:32.717 [2024-07-25 01:28:54.964322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.717 [2024-07-25 01:28:54.964366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:32.717 [2024-07-25 01:28:54.964389] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:32.717 [2024-07-25 01:28:54.964967] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:32.717 [2024-07-25 01:28:54.965139] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:32.717 [2024-07-25 01:28:54.965149] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:32.717 [2024-07-25 01:28:54.965155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:32.717 [2024-07-25 01:28:54.967806] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:32.717 [2024-07-25 01:28:54.976500] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:32.717 [2024-07-25 01:28:54.977225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.717 [2024-07-25 01:28:54.977268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:32.717 [2024-07-25 01:28:54.977289] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:32.717 [2024-07-25 01:28:54.977869] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:32.717 [2024-07-25 01:28:54.978130] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:32.717 [2024-07-25 01:28:54.978140] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:32.717 [2024-07-25 01:28:54.978147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:32.717 [2024-07-25 01:28:54.980803] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:32.717 [2024-07-25 01:28:54.989482] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:32.717 [2024-07-25 01:28:54.990185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.717 [2024-07-25 01:28:54.990230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:32.717 [2024-07-25 01:28:54.990252] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:32.717 [2024-07-25 01:28:54.990830] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:32.717 [2024-07-25 01:28:54.991097] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:32.717 [2024-07-25 01:28:54.991107] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:32.717 [2024-07-25 01:28:54.991113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:32.717 [2024-07-25 01:28:54.993819] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:32.717 [2024-07-25 01:28:55.002485] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:32.717 [2024-07-25 01:28:55.003097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.717 [2024-07-25 01:28:55.003140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:32.717 [2024-07-25 01:28:55.003163] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:32.717 [2024-07-25 01:28:55.003742] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:32.717 [2024-07-25 01:28:55.004047] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:32.717 [2024-07-25 01:28:55.004057] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:32.717 [2024-07-25 01:28:55.004064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:32.717 [2024-07-25 01:28:55.006766] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:32.717 [2024-07-25 01:28:55.015443] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:32.717 [2024-07-25 01:28:55.016157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.717 [2024-07-25 01:28:55.016200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:32.717 [2024-07-25 01:28:55.016221] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:32.717 [2024-07-25 01:28:55.016798] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:32.717 [2024-07-25 01:28:55.017144] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:32.717 [2024-07-25 01:28:55.017153] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:32.717 [2024-07-25 01:28:55.017159] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:32.717 [2024-07-25 01:28:55.019813] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:32.717 [2024-07-25 01:28:55.028331] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:32.717 [2024-07-25 01:28:55.028936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.717 [2024-07-25 01:28:55.028979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:32.717 [2024-07-25 01:28:55.029002] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:32.717 [2024-07-25 01:28:55.029462] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:32.717 [2024-07-25 01:28:55.029718] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:32.717 [2024-07-25 01:28:55.029730] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:32.717 [2024-07-25 01:28:55.029739] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:32.717 [2024-07-25 01:28:55.033809] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:32.717 [2024-07-25 01:28:55.041617] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:32.717 [2024-07-25 01:28:55.042316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.717 [2024-07-25 01:28:55.042358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:32.717 [2024-07-25 01:28:55.042380] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:32.717 [2024-07-25 01:28:55.042958] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:32.717 [2024-07-25 01:28:55.043205] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:32.717 [2024-07-25 01:28:55.043214] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:32.717 [2024-07-25 01:28:55.043221] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:32.717 [2024-07-25 01:28:55.045906] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:32.717 [2024-07-25 01:28:55.054499] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:32.717 [2024-07-25 01:28:55.055203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.717 [2024-07-25 01:28:55.055247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:32.717 [2024-07-25 01:28:55.055275] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:32.718 [2024-07-25 01:28:55.055545] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:32.718 [2024-07-25 01:28:55.055709] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:32.718 [2024-07-25 01:28:55.055718] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:32.718 [2024-07-25 01:28:55.055724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:32.718 [2024-07-25 01:28:55.058352] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:32.718 [2024-07-25 01:28:55.067493] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:32.718 [2024-07-25 01:28:55.068173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.718 [2024-07-25 01:28:55.068215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:32.718 [2024-07-25 01:28:55.068237] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:32.718 [2024-07-25 01:28:55.068537] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:32.718 [2024-07-25 01:28:55.068718] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:32.718 [2024-07-25 01:28:55.068728] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:32.718 [2024-07-25 01:28:55.068734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:32.718 [2024-07-25 01:28:55.071503] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:32.718 [2024-07-25 01:28:55.080444] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:32.718 [2024-07-25 01:28:55.081132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.718 [2024-07-25 01:28:55.081176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:32.718 [2024-07-25 01:28:55.081197] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:32.718 [2024-07-25 01:28:55.081554] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:32.718 [2024-07-25 01:28:55.081718] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:32.718 [2024-07-25 01:28:55.081727] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:32.718 [2024-07-25 01:28:55.081733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:32.718 [2024-07-25 01:28:55.084480] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:32.718 [2024-07-25 01:28:55.093389] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:32.718 [2024-07-25 01:28:55.094063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.718 [2024-07-25 01:28:55.094079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:32.718 [2024-07-25 01:28:55.094085] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:32.718 [2024-07-25 01:28:55.094247] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:32.718 [2024-07-25 01:28:55.094410] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:32.718 [2024-07-25 01:28:55.094422] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:32.718 [2024-07-25 01:28:55.094428] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:32.718 [2024-07-25 01:28:55.097124] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:32.718 [2024-07-25 01:28:55.106261] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:32.718 [2024-07-25 01:28:55.106943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.718 [2024-07-25 01:28:55.106958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:32.718 [2024-07-25 01:28:55.106965] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:32.718 [2024-07-25 01:28:55.107151] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:32.718 [2024-07-25 01:28:55.107325] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:32.718 [2024-07-25 01:28:55.107333] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:32.718 [2024-07-25 01:28:55.107340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:32.718 [2024-07-25 01:28:55.109995] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:32.718 [2024-07-25 01:28:55.119136] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:32.718 [2024-07-25 01:28:55.119820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.718 [2024-07-25 01:28:55.119863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:32.718 [2024-07-25 01:28:55.119885] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:32.718 [2024-07-25 01:28:55.120378] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:32.718 [2024-07-25 01:28:55.120608] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:32.718 [2024-07-25 01:28:55.120621] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:32.718 [2024-07-25 01:28:55.120630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:32.718 [2024-07-25 01:28:55.124692] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:32.718 [2024-07-25 01:28:55.132530] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:32.718 [2024-07-25 01:28:55.133223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.718 [2024-07-25 01:28:55.133267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:32.718 [2024-07-25 01:28:55.133289] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:32.718 [2024-07-25 01:28:55.133622] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:32.718 [2024-07-25 01:28:55.133791] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:32.718 [2024-07-25 01:28:55.133800] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:32.718 [2024-07-25 01:28:55.133806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:32.718 [2024-07-25 01:28:55.136558] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:32.718 [2024-07-25 01:28:55.145414] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:32.718 [2024-07-25 01:28:55.146026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.718 [2024-07-25 01:28:55.146047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:32.718 [2024-07-25 01:28:55.146054] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:32.718 [2024-07-25 01:28:55.146217] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:32.718 [2024-07-25 01:28:55.146380] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:32.718 [2024-07-25 01:28:55.146388] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:32.718 [2024-07-25 01:28:55.146394] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:32.718 [2024-07-25 01:28:55.148985] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:32.718 [2024-07-25 01:28:55.158322] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.718 [2024-07-25 01:28:55.158986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.718 [2024-07-25 01:28:55.159027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:32.718 [2024-07-25 01:28:55.159062] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:32.718 [2024-07-25 01:28:55.159642] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:32.718 [2024-07-25 01:28:55.159911] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.718 [2024-07-25 01:28:55.159921] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.718 [2024-07-25 01:28:55.159927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.718 [2024-07-25 01:28:55.162779] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:32.718 [2024-07-25 01:28:55.171395] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.718 [2024-07-25 01:28:55.172024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.718 [2024-07-25 01:28:55.172082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:32.718 [2024-07-25 01:28:55.172105] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:32.718 [2024-07-25 01:28:55.172684] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:32.718 [2024-07-25 01:28:55.173142] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.718 [2024-07-25 01:28:55.173152] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.718 [2024-07-25 01:28:55.173158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.718 [2024-07-25 01:28:55.175807] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:32.718 [2024-07-25 01:28:55.184315] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.718 [2024-07-25 01:28:55.185007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.718 [2024-07-25 01:28:55.185061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:32.718 [2024-07-25 01:28:55.185083] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:32.718 [2024-07-25 01:28:55.185534] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:32.718 [2024-07-25 01:28:55.185698] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.719 [2024-07-25 01:28:55.185708] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.719 [2024-07-25 01:28:55.185714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.719 [2024-07-25 01:28:55.188315] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:32.719 [2024-07-25 01:28:55.197201] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.719 [2024-07-25 01:28:55.197898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.719 [2024-07-25 01:28:55.197940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:32.719 [2024-07-25 01:28:55.197961] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:32.719 [2024-07-25 01:28:55.198554] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:32.719 [2024-07-25 01:28:55.199136] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.719 [2024-07-25 01:28:55.199146] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.719 [2024-07-25 01:28:55.199152] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.719 [2024-07-25 01:28:55.201767] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:32.981 [2024-07-25 01:28:55.210294] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.981 [2024-07-25 01:28:55.210924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.981 [2024-07-25 01:28:55.210939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:32.981 [2024-07-25 01:28:55.210947] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:32.981 [2024-07-25 01:28:55.211134] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:32.981 [2024-07-25 01:28:55.211307] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.981 [2024-07-25 01:28:55.211317] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.981 [2024-07-25 01:28:55.211323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.981 [2024-07-25 01:28:55.214080] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:32.981 [2024-07-25 01:28:55.223329] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.981 [2024-07-25 01:28:55.224001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.981 [2024-07-25 01:28:55.224053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:32.981 [2024-07-25 01:28:55.224077] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:32.981 [2024-07-25 01:28:55.224594] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:32.981 [2024-07-25 01:28:55.224759] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.981 [2024-07-25 01:28:55.224768] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.981 [2024-07-25 01:28:55.224782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.981 [2024-07-25 01:28:55.227530] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:32.981 [2024-07-25 01:28:55.236172] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.981 [2024-07-25 01:28:55.236880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.981 [2024-07-25 01:28:55.236924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:32.981 [2024-07-25 01:28:55.236946] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:32.981 [2024-07-25 01:28:55.237536] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:32.981 [2024-07-25 01:28:55.238005] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.981 [2024-07-25 01:28:55.238014] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.981 [2024-07-25 01:28:55.238020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.981 [2024-07-25 01:28:55.240774] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:32.981 [2024-07-25 01:28:55.249159] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.981 [2024-07-25 01:28:55.249825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.981 [2024-07-25 01:28:55.249862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:32.981 [2024-07-25 01:28:55.249885] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:32.981 [2024-07-25 01:28:55.250478] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:32.981 [2024-07-25 01:28:55.250716] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.981 [2024-07-25 01:28:55.250726] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.981 [2024-07-25 01:28:55.250732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.981 [2024-07-25 01:28:55.253482] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:32.981 [2024-07-25 01:28:55.262211] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.981 [2024-07-25 01:28:55.262895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.981 [2024-07-25 01:28:55.262911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:32.981 [2024-07-25 01:28:55.262919] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:32.981 [2024-07-25 01:28:55.263114] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:32.981 [2024-07-25 01:28:55.263293] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.981 [2024-07-25 01:28:55.263302] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.981 [2024-07-25 01:28:55.263309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.981 [2024-07-25 01:28:55.266096] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:32.981 [2024-07-25 01:28:55.275361] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.981 [2024-07-25 01:28:55.275987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.981 [2024-07-25 01:28:55.276007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:32.981 [2024-07-25 01:28:55.276013] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:32.981 [2024-07-25 01:28:55.276211] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:32.981 [2024-07-25 01:28:55.276390] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.981 [2024-07-25 01:28:55.276399] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.981 [2024-07-25 01:28:55.276406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.981 [2024-07-25 01:28:55.279190] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:32.982 [2024-07-25 01:28:55.288451] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.982 [2024-07-25 01:28:55.289124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.982 [2024-07-25 01:28:55.289142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:32.982 [2024-07-25 01:28:55.289149] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:32.982 [2024-07-25 01:28:55.289321] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:32.982 [2024-07-25 01:28:55.289494] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.982 [2024-07-25 01:28:55.289503] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.982 [2024-07-25 01:28:55.289510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.982 [2024-07-25 01:28:55.292261] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:32.982 [2024-07-25 01:28:55.301519] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.982 [2024-07-25 01:28:55.302215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.982 [2024-07-25 01:28:55.302232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:32.982 [2024-07-25 01:28:55.302238] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:32.982 [2024-07-25 01:28:55.302402] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:32.982 [2024-07-25 01:28:55.302564] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.982 [2024-07-25 01:28:55.302571] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.982 [2024-07-25 01:28:55.302577] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.982 [2024-07-25 01:28:55.305318] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:32.982 [2024-07-25 01:28:55.314570] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.982 [2024-07-25 01:28:55.315234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.982 [2024-07-25 01:28:55.315251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:32.982 [2024-07-25 01:28:55.315258] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:32.982 [2024-07-25 01:28:55.315429] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:32.982 [2024-07-25 01:28:55.315605] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.982 [2024-07-25 01:28:55.315613] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.982 [2024-07-25 01:28:55.315619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.982 [2024-07-25 01:28:55.318370] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:32.982 [2024-07-25 01:28:55.327609] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.982 [2024-07-25 01:28:55.328295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.982 [2024-07-25 01:28:55.328312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:32.982 [2024-07-25 01:28:55.328318] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:32.982 [2024-07-25 01:28:55.328481] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:32.982 [2024-07-25 01:28:55.328643] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.982 [2024-07-25 01:28:55.328652] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.982 [2024-07-25 01:28:55.328657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.982 [2024-07-25 01:28:55.331401] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:32.982 [2024-07-25 01:28:55.340647] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.982 [2024-07-25 01:28:55.341274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.982 [2024-07-25 01:28:55.341290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:32.982 [2024-07-25 01:28:55.341297] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:32.982 [2024-07-25 01:28:55.341468] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:32.982 [2024-07-25 01:28:55.341640] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.982 [2024-07-25 01:28:55.341648] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.982 [2024-07-25 01:28:55.341654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.982 [2024-07-25 01:28:55.344406] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:32.982 [2024-07-25 01:28:55.353681] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.982 [2024-07-25 01:28:55.354372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.982 [2024-07-25 01:28:55.354388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:32.982 [2024-07-25 01:28:55.354395] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:32.982 [2024-07-25 01:28:55.354557] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:32.982 [2024-07-25 01:28:55.354720] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.982 [2024-07-25 01:28:55.354728] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.982 [2024-07-25 01:28:55.354734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.982 [2024-07-25 01:28:55.357486] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:32.982 [2024-07-25 01:28:55.366731] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.982 [2024-07-25 01:28:55.367420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.982 [2024-07-25 01:28:55.367436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:32.982 [2024-07-25 01:28:55.367442] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:32.982 [2024-07-25 01:28:55.367604] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:32.982 [2024-07-25 01:28:55.367767] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.982 [2024-07-25 01:28:55.367775] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.982 [2024-07-25 01:28:55.367781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.982 [2024-07-25 01:28:55.370563] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:32.982 [2024-07-25 01:28:55.379816] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.982 [2024-07-25 01:28:55.380507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.982 [2024-07-25 01:28:55.380524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:32.982 [2024-07-25 01:28:55.380532] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:32.982 [2024-07-25 01:28:55.380694] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:32.982 [2024-07-25 01:28:55.380857] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.982 [2024-07-25 01:28:55.380866] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.982 [2024-07-25 01:28:55.380872] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.982 [2024-07-25 01:28:55.383627] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:32.982 [2024-07-25 01:28:55.392859] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.982 [2024-07-25 01:28:55.393559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.982 [2024-07-25 01:28:55.393576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:32.982 [2024-07-25 01:28:55.393583] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:32.982 [2024-07-25 01:28:55.393755] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:32.982 [2024-07-25 01:28:55.393927] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.982 [2024-07-25 01:28:55.393935] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.982 [2024-07-25 01:28:55.393941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.982 [2024-07-25 01:28:55.396689] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:32.982 [2024-07-25 01:28:55.405929] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.982 [2024-07-25 01:28:55.406636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.982 [2024-07-25 01:28:55.406652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:32.983 [2024-07-25 01:28:55.406662] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:32.983 [2024-07-25 01:28:55.406834] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:32.983 [2024-07-25 01:28:55.407006] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.983 [2024-07-25 01:28:55.407014] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.983 [2024-07-25 01:28:55.407020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.983 [2024-07-25 01:28:55.409773] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:32.983 [2024-07-25 01:28:55.419131] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.983 [2024-07-25 01:28:55.419833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.983 [2024-07-25 01:28:55.419850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:32.983 [2024-07-25 01:28:55.419858] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:32.983 [2024-07-25 01:28:55.420035] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:32.983 [2024-07-25 01:28:55.420218] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.983 [2024-07-25 01:28:55.420228] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.983 [2024-07-25 01:28:55.420234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.983 [2024-07-25 01:28:55.423084] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:32.983 [2024-07-25 01:28:55.432144] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.983 [2024-07-25 01:28:55.432709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.983 [2024-07-25 01:28:55.432727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:32.983 [2024-07-25 01:28:55.432735] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:32.983 [2024-07-25 01:28:55.432906] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:32.983 [2024-07-25 01:28:55.433084] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.983 [2024-07-25 01:28:55.433093] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.983 [2024-07-25 01:28:55.433100] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.983 [2024-07-25 01:28:55.435849] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:32.983 [2024-07-25 01:28:55.445110] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.983 [2024-07-25 01:28:55.445746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.983 [2024-07-25 01:28:55.445763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:32.983 [2024-07-25 01:28:55.445771] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:32.983 [2024-07-25 01:28:55.445943] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:32.983 [2024-07-25 01:28:55.446123] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.983 [2024-07-25 01:28:55.446137] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.983 [2024-07-25 01:28:55.446144] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.983 [2024-07-25 01:28:55.448890] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:33.507 [2024-07-25 01:28:55.810188] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.507 [2024-07-25 01:28:55.810860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.507 [2024-07-25 01:28:55.810876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:33.507 [2024-07-25 01:28:55.810884] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:33.507 [2024-07-25 01:28:55.811061] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:33.507 [2024-07-25 01:28:55.811234] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.507 [2024-07-25 01:28:55.811243] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.507 [2024-07-25 01:28:55.811250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.507 [2024-07-25 01:28:55.813994] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:33.507 [2024-07-25 01:28:55.823248] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.507 [2024-07-25 01:28:55.823933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.507 [2024-07-25 01:28:55.823949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:33.507 [2024-07-25 01:28:55.823956] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:33.507 [2024-07-25 01:28:55.824133] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:33.507 [2024-07-25 01:28:55.824307] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.507 [2024-07-25 01:28:55.824315] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.507 [2024-07-25 01:28:55.824321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.507 [2024-07-25 01:28:55.827071] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:33.507 [2024-07-25 01:28:55.836312] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.507 [2024-07-25 01:28:55.837002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.507 [2024-07-25 01:28:55.837020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:33.507 [2024-07-25 01:28:55.837027] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:33.507 [2024-07-25 01:28:55.837205] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:33.507 [2024-07-25 01:28:55.837378] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.507 [2024-07-25 01:28:55.837388] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.507 [2024-07-25 01:28:55.837395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.507 [2024-07-25 01:28:55.840141] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:33.507 [2024-07-25 01:28:55.849385] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.507 [2024-07-25 01:28:55.850074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.507 [2024-07-25 01:28:55.850090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:33.507 [2024-07-25 01:28:55.850098] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:33.507 [2024-07-25 01:28:55.850270] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:33.507 [2024-07-25 01:28:55.850442] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.507 [2024-07-25 01:28:55.850450] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.507 [2024-07-25 01:28:55.850456] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.507 [2024-07-25 01:28:55.853205] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:33.507 [2024-07-25 01:28:55.862446] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.507 [2024-07-25 01:28:55.863132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.507 [2024-07-25 01:28:55.863148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:33.507 [2024-07-25 01:28:55.863156] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:33.507 [2024-07-25 01:28:55.863332] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:33.507 [2024-07-25 01:28:55.863496] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.507 [2024-07-25 01:28:55.863504] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.507 [2024-07-25 01:28:55.863510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.507 [2024-07-25 01:28:55.866245] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:33.507 [2024-07-25 01:28:55.875528] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.507 [2024-07-25 01:28:55.876215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.507 [2024-07-25 01:28:55.876233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:33.507 [2024-07-25 01:28:55.876241] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:33.507 [2024-07-25 01:28:55.876413] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:33.507 [2024-07-25 01:28:55.876586] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.507 [2024-07-25 01:28:55.876595] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.507 [2024-07-25 01:28:55.876601] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.507 [2024-07-25 01:28:55.879354] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:33.507 [2024-07-25 01:28:55.888604] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.507 [2024-07-25 01:28:55.889263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.507 [2024-07-25 01:28:55.889280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:33.507 [2024-07-25 01:28:55.889295] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:33.507 [2024-07-25 01:28:55.889468] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:33.507 [2024-07-25 01:28:55.889641] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.507 [2024-07-25 01:28:55.889650] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.507 [2024-07-25 01:28:55.889656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.507 [2024-07-25 01:28:55.892406] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:33.507 [2024-07-25 01:28:55.901654] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.507 [2024-07-25 01:28:55.902261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.507 [2024-07-25 01:28:55.902278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:33.507 [2024-07-25 01:28:55.902285] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:33.507 [2024-07-25 01:28:55.902457] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:33.507 [2024-07-25 01:28:55.902630] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.507 [2024-07-25 01:28:55.902639] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.507 [2024-07-25 01:28:55.902646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.507 [2024-07-25 01:28:55.905399] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:33.507 [2024-07-25 01:28:55.914661] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.507 [2024-07-25 01:28:55.915347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.507 [2024-07-25 01:28:55.915365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:33.507 [2024-07-25 01:28:55.915372] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:33.507 [2024-07-25 01:28:55.915545] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:33.507 [2024-07-25 01:28:55.915718] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.507 [2024-07-25 01:28:55.915727] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.507 [2024-07-25 01:28:55.915733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.507 [2024-07-25 01:28:55.918493] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:33.507 [2024-07-25 01:28:55.927844] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.507 [2024-07-25 01:28:55.928528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.507 [2024-07-25 01:28:55.928546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:33.507 [2024-07-25 01:28:55.928553] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:33.507 [2024-07-25 01:28:55.928731] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:33.507 [2024-07-25 01:28:55.928909] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.507 [2024-07-25 01:28:55.928921] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.507 [2024-07-25 01:28:55.928928] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.507 [2024-07-25 01:28:55.931704] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:33.507 [2024-07-25 01:28:55.940816] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.507 [2024-07-25 01:28:55.941424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.507 [2024-07-25 01:28:55.941441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:33.507 [2024-07-25 01:28:55.941449] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:33.507 [2024-07-25 01:28:55.941622] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:33.507 [2024-07-25 01:28:55.941794] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.507 [2024-07-25 01:28:55.941803] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.507 [2024-07-25 01:28:55.941809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.507 [2024-07-25 01:28:55.944562] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:33.507 [2024-07-25 01:28:55.953813] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.507 [2024-07-25 01:28:55.954495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.507 [2024-07-25 01:28:55.954512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:33.507 [2024-07-25 01:28:55.954520] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:33.507 [2024-07-25 01:28:55.954691] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:33.507 [2024-07-25 01:28:55.954864] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.507 [2024-07-25 01:28:55.954873] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.507 [2024-07-25 01:28:55.954879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.507 [2024-07-25 01:28:55.957629] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:33.507 [2024-07-25 01:28:55.966867] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.507 [2024-07-25 01:28:55.967550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.507 [2024-07-25 01:28:55.967567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:33.507 [2024-07-25 01:28:55.967574] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:33.507 [2024-07-25 01:28:55.967746] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:33.508 [2024-07-25 01:28:55.967918] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.508 [2024-07-25 01:28:55.967928] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.508 [2024-07-25 01:28:55.967934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.508 [2024-07-25 01:28:55.970706] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:33.508 [2024-07-25 01:28:55.979956] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.508 [2024-07-25 01:28:55.980630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.508 [2024-07-25 01:28:55.980646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:33.508 [2024-07-25 01:28:55.980653] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:33.508 [2024-07-25 01:28:55.980824] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:33.508 [2024-07-25 01:28:55.980996] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.508 [2024-07-25 01:28:55.981004] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.508 [2024-07-25 01:28:55.981011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.508 [2024-07-25 01:28:55.983763] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:33.508 [2024-07-25 01:28:55.993020] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.508 [2024-07-25 01:28:55.993719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.508 [2024-07-25 01:28:55.993736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:33.508 [2024-07-25 01:28:55.993743] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:33.508 [2024-07-25 01:28:55.993920] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:33.508 [2024-07-25 01:28:55.994105] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.508 [2024-07-25 01:28:55.994115] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.508 [2024-07-25 01:28:55.994122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.508 [2024-07-25 01:28:55.996957] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:33.768 [2024-07-25 01:28:56.006162] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.768 [2024-07-25 01:28:56.006841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.768 [2024-07-25 01:28:56.006857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:33.768 [2024-07-25 01:28:56.006864] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:33.768 [2024-07-25 01:28:56.007035] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:33.768 [2024-07-25 01:28:56.007243] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.768 [2024-07-25 01:28:56.007252] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.768 [2024-07-25 01:28:56.007259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.768 [2024-07-25 01:28:56.010005] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:33.768 [2024-07-25 01:28:56.019252] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.768 [2024-07-25 01:28:56.019943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.768 [2024-07-25 01:28:56.019960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:33.768 [2024-07-25 01:28:56.019967] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:33.768 [2024-07-25 01:28:56.020147] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:33.768 [2024-07-25 01:28:56.020319] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.768 [2024-07-25 01:28:56.020327] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.768 [2024-07-25 01:28:56.020334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.769 [2024-07-25 01:28:56.023084] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:33.769 [2024-07-25 01:28:56.032321] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.769 [2024-07-25 01:28:56.033007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.769 [2024-07-25 01:28:56.033024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:33.769 [2024-07-25 01:28:56.033031] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:33.769 [2024-07-25 01:28:56.033208] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:33.769 [2024-07-25 01:28:56.033380] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.769 [2024-07-25 01:28:56.033388] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.769 [2024-07-25 01:28:56.033395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.769 [2024-07-25 01:28:56.036135] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:33.769 [2024-07-25 01:28:56.045367] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.769 [2024-07-25 01:28:56.046033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.769 [2024-07-25 01:28:56.046055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:33.769 [2024-07-25 01:28:56.046063] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:33.769 [2024-07-25 01:28:56.046233] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:33.769 [2024-07-25 01:28:56.046406] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.769 [2024-07-25 01:28:56.046416] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.769 [2024-07-25 01:28:56.046422] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.769 [2024-07-25 01:28:56.049177] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:33.769 [2024-07-25 01:28:56.058434] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.769 [2024-07-25 01:28:56.059115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.769 [2024-07-25 01:28:56.059133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:33.769 [2024-07-25 01:28:56.059140] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:33.769 [2024-07-25 01:28:56.059312] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:33.769 [2024-07-25 01:28:56.059484] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.769 [2024-07-25 01:28:56.059494] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.769 [2024-07-25 01:28:56.059504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.769 [2024-07-25 01:28:56.062256] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:33.769 [2024-07-25 01:28:56.071571] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.769 [2024-07-25 01:28:56.072272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.769 [2024-07-25 01:28:56.072289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:33.769 [2024-07-25 01:28:56.072296] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:33.769 [2024-07-25 01:28:56.072458] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:33.769 [2024-07-25 01:28:56.072621] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.769 [2024-07-25 01:28:56.072630] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.769 [2024-07-25 01:28:56.072638] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.769 [2024-07-25 01:28:56.075384] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:33.769 [2024-07-25 01:28:56.084641] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.769 [2024-07-25 01:28:56.085324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.769 [2024-07-25 01:28:56.085340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:33.769 [2024-07-25 01:28:56.085347] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:33.769 [2024-07-25 01:28:56.085510] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:33.769 [2024-07-25 01:28:56.085673] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.769 [2024-07-25 01:28:56.085682] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.769 [2024-07-25 01:28:56.085688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.769 [2024-07-25 01:28:56.088439] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:33.769 [2024-07-25 01:28:56.097681] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:33.769 [2024-07-25 01:28:56.098371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.769 [2024-07-25 01:28:56.098388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:33.769 [2024-07-25 01:28:56.098394] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:33.769 [2024-07-25 01:28:56.098558] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:33.769 [2024-07-25 01:28:56.098722] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:33.769 [2024-07-25 01:28:56.098730] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:33.769 [2024-07-25 01:28:56.098737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:33.769 [2024-07-25 01:28:56.101484] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:33.769 [2024-07-25 01:28:56.110795] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:33.769 [2024-07-25 01:28:56.111514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.769 [2024-07-25 01:28:56.111530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:33.769 [2024-07-25 01:28:56.111538] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:33.769 [2024-07-25 01:28:56.111715] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:33.769 [2024-07-25 01:28:56.111894] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:33.769 [2024-07-25 01:28:56.111903] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:33.769 [2024-07-25 01:28:56.111910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:33.769 [2024-07-25 01:28:56.114748] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:33.769 [2024-07-25 01:28:56.123811] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:33.769 [2024-07-25 01:28:56.124498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.769 [2024-07-25 01:28:56.124515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:33.769 [2024-07-25 01:28:56.124523] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:33.769 [2024-07-25 01:28:56.124686] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:33.769 [2024-07-25 01:28:56.124849] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:33.769 [2024-07-25 01:28:56.124858] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:33.769 [2024-07-25 01:28:56.124863] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:33.769 [2024-07-25 01:28:56.127632] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:33.769 [2024-07-25 01:28:56.136900] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:33.769 [2024-07-25 01:28:56.137556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.769 [2024-07-25 01:28:56.137573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:33.769 [2024-07-25 01:28:56.137580] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:33.769 [2024-07-25 01:28:56.137753] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:33.769 [2024-07-25 01:28:56.137925] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:33.769 [2024-07-25 01:28:56.137935] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:33.769 [2024-07-25 01:28:56.137941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:33.769 [2024-07-25 01:28:56.140693] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:33.769 [2024-07-25 01:28:56.149957] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:33.769 [2024-07-25 01:28:56.150651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.769 [2024-07-25 01:28:56.150668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:33.770 [2024-07-25 01:28:56.150675] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:33.770 [2024-07-25 01:28:56.150850] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:33.770 [2024-07-25 01:28:56.151024] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:33.770 [2024-07-25 01:28:56.151033] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:33.770 [2024-07-25 01:28:56.151039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:33.770 [2024-07-25 01:28:56.153787] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:33.770 [2024-07-25 01:28:56.163080] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:33.770 [2024-07-25 01:28:56.163686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.770 [2024-07-25 01:28:56.163703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:33.770 [2024-07-25 01:28:56.163711] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:33.770 [2024-07-25 01:28:56.163883] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:33.770 [2024-07-25 01:28:56.164061] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:33.770 [2024-07-25 01:28:56.164071] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:33.770 [2024-07-25 01:28:56.164078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:33.770 [2024-07-25 01:28:56.166819] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:33.770 [2024-07-25 01:28:56.176116] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:33.770 [2024-07-25 01:28:56.176714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.770 [2024-07-25 01:28:56.176731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:33.770 [2024-07-25 01:28:56.176738] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:33.770 [2024-07-25 01:28:56.176910] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:33.770 [2024-07-25 01:28:56.177108] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:33.770 [2024-07-25 01:28:56.177120] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:33.770 [2024-07-25 01:28:56.177126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:33.770 [2024-07-25 01:28:56.179951] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:33.770 [2024-07-25 01:28:56.189093] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:33.770 [2024-07-25 01:28:56.189740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.770 [2024-07-25 01:28:56.189757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:33.770 [2024-07-25 01:28:56.189764] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:33.770 [2024-07-25 01:28:56.189935] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:33.770 [2024-07-25 01:28:56.190114] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:33.770 [2024-07-25 01:28:56.190124] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:33.770 [2024-07-25 01:28:56.190135] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:33.770 [2024-07-25 01:28:56.192885] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:33.770 [2024-07-25 01:28:56.202118] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:33.770 [2024-07-25 01:28:56.202731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.770 [2024-07-25 01:28:56.202748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:33.770 [2024-07-25 01:28:56.202756] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:33.770 [2024-07-25 01:28:56.202928] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:33.770 [2024-07-25 01:28:56.203107] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:33.770 [2024-07-25 01:28:56.203117] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:33.770 [2024-07-25 01:28:56.203123] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:33.770 [2024-07-25 01:28:56.205866] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:33.770 [2024-07-25 01:28:56.215122] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:33.770 [2024-07-25 01:28:56.215761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.770 [2024-07-25 01:28:56.215779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:33.770 [2024-07-25 01:28:56.215786] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:33.770 [2024-07-25 01:28:56.215958] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:33.770 [2024-07-25 01:28:56.216146] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:33.770 [2024-07-25 01:28:56.216156] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:33.770 [2024-07-25 01:28:56.216162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:33.770 [2024-07-25 01:28:56.218908] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:33.770 [2024-07-25 01:28:56.228162] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:33.770 [2024-07-25 01:28:56.228839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.770 [2024-07-25 01:28:56.228856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:33.770 [2024-07-25 01:28:56.228864] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:33.770 [2024-07-25 01:28:56.229036] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:33.770 [2024-07-25 01:28:56.229214] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:33.770 [2024-07-25 01:28:56.229224] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:33.770 [2024-07-25 01:28:56.229230] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:33.770 [2024-07-25 01:28:56.231979] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:33.770 [2024-07-25 01:28:56.241250] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:33.770 [2024-07-25 01:28:56.241843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.770 [2024-07-25 01:28:56.241864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:33.770 [2024-07-25 01:28:56.241871] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:33.770 [2024-07-25 01:28:56.242048] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:33.770 [2024-07-25 01:28:56.242222] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:33.770 [2024-07-25 01:28:56.242231] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:33.770 [2024-07-25 01:28:56.242237] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:33.770 [2024-07-25 01:28:56.244984] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:33.770 [2024-07-25 01:28:56.254289] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:33.770 [2024-07-25 01:28:56.255106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.770 [2024-07-25 01:28:56.255124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:33.770 [2024-07-25 01:28:56.255131] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:33.770 [2024-07-25 01:28:56.255316] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:33.770 [2024-07-25 01:28:56.255490] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:33.770 [2024-07-25 01:28:56.255498] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:33.770 [2024-07-25 01:28:56.255505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:33.770 [2024-07-25 01:28:56.258343] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:34.033 [2024-07-25 01:28:56.267338] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:34.033 [2024-07-25 01:28:56.268030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.033 [2024-07-25 01:28:56.268052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:34.033 [2024-07-25 01:28:56.268061] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:34.033 [2024-07-25 01:28:56.268233] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:34.033 [2024-07-25 01:28:56.268406] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:34.033 [2024-07-25 01:28:56.268415] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:34.033 [2024-07-25 01:28:56.268421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:34.033 [2024-07-25 01:28:56.271211] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:34.033 [2024-07-25 01:28:56.280314] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:34.033 [2024-07-25 01:28:56.280911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.033 [2024-07-25 01:28:56.280928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:34.033 [2024-07-25 01:28:56.280934] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:34.033 [2024-07-25 01:28:56.281112] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:34.033 [2024-07-25 01:28:56.281289] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:34.033 [2024-07-25 01:28:56.281299] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:34.033 [2024-07-25 01:28:56.281305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:34.033 [2024-07-25 01:28:56.284059] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:34.033 [2024-07-25 01:28:56.293318] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:34.033 [2024-07-25 01:28:56.293980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.033 [2024-07-25 01:28:56.293997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:34.033 [2024-07-25 01:28:56.294004] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:34.033 [2024-07-25 01:28:56.294182] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:34.033 [2024-07-25 01:28:56.294355] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:34.033 [2024-07-25 01:28:56.294365] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:34.033 [2024-07-25 01:28:56.294371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:34.033 [2024-07-25 01:28:56.297120] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:34.033 [2024-07-25 01:28:56.306382] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:34.033 [2024-07-25 01:28:56.307056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.033 [2024-07-25 01:28:56.307073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:34.033 [2024-07-25 01:28:56.307080] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:34.033 [2024-07-25 01:28:56.307252] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:34.033 [2024-07-25 01:28:56.307425] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:34.033 [2024-07-25 01:28:56.307434] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:34.033 [2024-07-25 01:28:56.307440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:34.033 [2024-07-25 01:28:56.310193] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:34.033 [2024-07-25 01:28:56.319457] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:34.033 [2024-07-25 01:28:56.320138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.033 [2024-07-25 01:28:56.320155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:34.033 [2024-07-25 01:28:56.320163] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:34.033 [2024-07-25 01:28:56.320339] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:34.033 [2024-07-25 01:28:56.320503] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:34.033 [2024-07-25 01:28:56.320511] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:34.033 [2024-07-25 01:28:56.320517] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:34.033 [2024-07-25 01:28:56.323263] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:34.033 [2024-07-25 01:28:56.332527] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:34.033 [2024-07-25 01:28:56.333216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.033 [2024-07-25 01:28:56.333233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:34.033 [2024-07-25 01:28:56.333240] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:34.033 [2024-07-25 01:28:56.333403] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:34.033 [2024-07-25 01:28:56.333567] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:34.033 [2024-07-25 01:28:56.333576] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:34.033 [2024-07-25 01:28:56.333582] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:34.033 [2024-07-25 01:28:56.336321] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:34.033 [2024-07-25 01:28:56.345577] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:34.034 [2024-07-25 01:28:56.346277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.034 [2024-07-25 01:28:56.346293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:34.034 [2024-07-25 01:28:56.346300] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:34.034 [2024-07-25 01:28:56.346463] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:34.034 [2024-07-25 01:28:56.346626] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:34.034 [2024-07-25 01:28:56.346636] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:34.034 [2024-07-25 01:28:56.346642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:34.034 [2024-07-25 01:28:56.349386] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:34.034 [2024-07-25 01:28:56.358648] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:34.034 [2024-07-25 01:28:56.359301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.034 [2024-07-25 01:28:56.359319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:34.034 [2024-07-25 01:28:56.359326] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:34.034 [2024-07-25 01:28:56.359498] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:34.034 [2024-07-25 01:28:56.359671] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:34.034 [2024-07-25 01:28:56.359680] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:34.034 [2024-07-25 01:28:56.359686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:34.034 [2024-07-25 01:28:56.362467] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:34.034 [2024-07-25 01:28:56.371750] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:34.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1051929 Killed                  "${NVMF_APP[@]}" "$@"
00:28:34.034 [2024-07-25 01:28:56.372399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.034 [2024-07-25 01:28:56.372421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:34.034 [2024-07-25 01:28:56.372428] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:34.034 [2024-07-25 01:28:56.372605] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:34.034 01:28:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:28:34.034 [2024-07-25 01:28:56.372782] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:34.034 [2024-07-25 01:28:56.372792] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:34.034 [2024-07-25 01:28:56.372799] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:34.034 01:28:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:28:34.034 01:28:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:28:34.034 01:28:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable
00:28:34.034 01:28:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:34.034 [2024-07-25 01:28:56.375634] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:34.034 01:28:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1053333
00:28:34.034 01:28:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1053333
00:28:34.034 01:28:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:28:34.034 01:28:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 1053333 ']'
00:28:34.034 01:28:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:34.034 01:28:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100
00:28:34.034 01:28:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:34.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:34.034 01:28:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable
00:28:34.034 01:28:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:34.034 [2024-07-25 01:28:56.384839] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:34.034 [2024-07-25 01:28:56.385554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.034 [2024-07-25 01:28:56.385575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:34.034 [2024-07-25 01:28:56.385584] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:34.034 [2024-07-25 01:28:56.385763] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:34.034 [2024-07-25 01:28:56.385942] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:34.034 [2024-07-25 01:28:56.385952] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:34.034 [2024-07-25 01:28:56.385959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:34.034 [2024-07-25 01:28:56.388795] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:34.034 [2024-07-25 01:28:56.398008] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:34.034 [2024-07-25 01:28:56.398625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.034 [2024-07-25 01:28:56.398642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:34.034 [2024-07-25 01:28:56.398649] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:34.034 [2024-07-25 01:28:56.398829] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:34.034 [2024-07-25 01:28:56.399007] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:34.034 [2024-07-25 01:28:56.399016] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:34.034 [2024-07-25 01:28:56.399023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:34.034 [2024-07-25 01:28:56.401858] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:34.034 [2024-07-25 01:28:56.411024] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.034 [2024-07-25 01:28:56.411643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.034 [2024-07-25 01:28:56.411660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:34.034 [2024-07-25 01:28:56.411667] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:34.034 [2024-07-25 01:28:56.411844] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:34.034 [2024-07-25 01:28:56.412022] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.034 [2024-07-25 01:28:56.412031] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.034 [2024-07-25 01:28:56.412038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.034 [2024-07-25 01:28:56.414810] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.034 [2024-07-25 01:28:56.424173] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.034 [2024-07-25 01:28:56.424745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.034 [2024-07-25 01:28:56.424762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:34.034 [2024-07-25 01:28:56.424770] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:34.034 [2024-07-25 01:28:56.424947] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:34.034 [2024-07-25 01:28:56.425133] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.034 [2024-07-25 01:28:56.425143] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.034 [2024-07-25 01:28:56.425150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.034 [2024-07-25 01:28:56.427958] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:34.034 [2024-07-25 01:28:56.428809] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:28:34.034 [2024-07-25 01:28:56.428850] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:34.034 [2024-07-25 01:28:56.437357] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.034 [2024-07-25 01:28:56.437903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.034 [2024-07-25 01:28:56.437921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:34.034 [2024-07-25 01:28:56.437929] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:34.034 [2024-07-25 01:28:56.438125] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:34.034 [2024-07-25 01:28:56.438299] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.034 [2024-07-25 01:28:56.438308] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.034 [2024-07-25 01:28:56.438315] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.034 [2024-07-25 01:28:56.441119] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.034 [2024-07-25 01:28:56.450382] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.034 [2024-07-25 01:28:56.451012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.035 [2024-07-25 01:28:56.451029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:34.035 [2024-07-25 01:28:56.451036] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:34.035 [2024-07-25 01:28:56.451236] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:34.035 [2024-07-25 01:28:56.451415] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.035 [2024-07-25 01:28:56.451424] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.035 [2024-07-25 01:28:56.451431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.035 [2024-07-25 01:28:56.454223] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.035 EAL: No free 2048 kB hugepages reported on node 1 00:28:34.035 [2024-07-25 01:28:56.463504] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.035 [2024-07-25 01:28:56.464109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.035 [2024-07-25 01:28:56.464127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:34.035 [2024-07-25 01:28:56.464135] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:34.035 [2024-07-25 01:28:56.464312] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:34.035 [2024-07-25 01:28:56.464490] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.035 [2024-07-25 01:28:56.464500] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.035 [2024-07-25 01:28:56.464507] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.035 [2024-07-25 01:28:56.467297] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.035 [2024-07-25 01:28:56.476614] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.035 [2024-07-25 01:28:56.477232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.035 [2024-07-25 01:28:56.477249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:34.035 [2024-07-25 01:28:56.477257] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:34.035 [2024-07-25 01:28:56.477440] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:34.035 [2024-07-25 01:28:56.477614] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.035 [2024-07-25 01:28:56.477623] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.035 [2024-07-25 01:28:56.477634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.035 [2024-07-25 01:28:56.480411] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.035 [2024-07-25 01:28:56.487006] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:34.035 [2024-07-25 01:28:56.489715] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.035 [2024-07-25 01:28:56.490391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.035 [2024-07-25 01:28:56.490407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:34.035 [2024-07-25 01:28:56.490415] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:34.035 [2024-07-25 01:28:56.490577] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:34.035 [2024-07-25 01:28:56.490740] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.035 [2024-07-25 01:28:56.490750] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.035 [2024-07-25 01:28:56.490757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.035 [2024-07-25 01:28:56.493516] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.035 [2024-07-25 01:28:56.502774] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.035 [2024-07-25 01:28:56.503464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.035 [2024-07-25 01:28:56.503481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:34.035 [2024-07-25 01:28:56.503488] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:34.035 [2024-07-25 01:28:56.503650] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:34.035 [2024-07-25 01:28:56.503814] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.035 [2024-07-25 01:28:56.503823] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.035 [2024-07-25 01:28:56.503830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.035 [2024-07-25 01:28:56.506629] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.035 [2024-07-25 01:28:56.515895] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.035 [2024-07-25 01:28:56.516586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.035 [2024-07-25 01:28:56.516605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:34.035 [2024-07-25 01:28:56.516613] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:34.035 [2024-07-25 01:28:56.516787] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:34.035 [2024-07-25 01:28:56.516960] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.035 [2024-07-25 01:28:56.516969] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.035 [2024-07-25 01:28:56.516977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.035 [2024-07-25 01:28:56.519815] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.296 [2024-07-25 01:28:56.529011] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.296 [2024-07-25 01:28:56.529695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.296 [2024-07-25 01:28:56.529716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:34.296 [2024-07-25 01:28:56.529725] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:34.296 [2024-07-25 01:28:56.529905] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:34.296 [2024-07-25 01:28:56.530090] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.296 [2024-07-25 01:28:56.530100] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.296 [2024-07-25 01:28:56.530109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.296 [2024-07-25 01:28:56.532904] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.296 [2024-07-25 01:28:56.542006] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.296 [2024-07-25 01:28:56.542435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.296 [2024-07-25 01:28:56.542453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:34.296 [2024-07-25 01:28:56.542461] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:34.296 [2024-07-25 01:28:56.542634] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:34.296 [2024-07-25 01:28:56.542809] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.296 [2024-07-25 01:28:56.542820] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.296 [2024-07-25 01:28:56.542827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.296 [2024-07-25 01:28:56.545580] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.296 [2024-07-25 01:28:56.555033] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.296 [2024-07-25 01:28:56.555731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.296 [2024-07-25 01:28:56.555748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:34.296 [2024-07-25 01:28:56.555757] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:34.296 [2024-07-25 01:28:56.555930] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:34.296 [2024-07-25 01:28:56.556126] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.296 [2024-07-25 01:28:56.556136] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.296 [2024-07-25 01:28:56.556144] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.296 [2024-07-25 01:28:56.558938] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:34.296 [2024-07-25 01:28:56.562219] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:34.296 [2024-07-25 01:28:56.562247] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:34.296 [2024-07-25 01:28:56.562254] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:34.296 [2024-07-25 01:28:56.562260] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:28:34.296 [2024-07-25 01:28:56.562267] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:34.296 [2024-07-25 01:28:56.562327] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:34.296 [2024-07-25 01:28:56.562411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:34.296 [2024-07-25 01:28:56.562412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:34.296 [2024-07-25 01:28:56.568127] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.296 [2024-07-25 01:28:56.568772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.296 [2024-07-25 01:28:56.568791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:34.296 [2024-07-25 01:28:56.568799] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:34.296 [2024-07-25 01:28:56.568978] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:34.296 [2024-07-25 01:28:56.569162] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.296 [2024-07-25 01:28:56.569172] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.296 [2024-07-25 01:28:56.569179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.296 [2024-07-25 01:28:56.572016] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.296 [2024-07-25 01:28:56.581252] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.296 [2024-07-25 01:28:56.581965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.296 [2024-07-25 01:28:56.581984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:34.297 [2024-07-25 01:28:56.581992] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:34.297 [2024-07-25 01:28:56.582176] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:34.297 [2024-07-25 01:28:56.582356] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.297 [2024-07-25 01:28:56.582365] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.297 [2024-07-25 01:28:56.582372] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.297 [2024-07-25 01:28:56.585203] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.297 [2024-07-25 01:28:56.594448] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.297 [2024-07-25 01:28:56.595165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.297 [2024-07-25 01:28:56.595185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:34.297 [2024-07-25 01:28:56.595194] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:34.297 [2024-07-25 01:28:56.595373] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:34.297 [2024-07-25 01:28:56.595554] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.297 [2024-07-25 01:28:56.595564] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.297 [2024-07-25 01:28:56.595571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.297 [2024-07-25 01:28:56.598405] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.297 [2024-07-25 01:28:56.607600] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.297 [2024-07-25 01:28:56.608313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.297 [2024-07-25 01:28:56.608332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:34.297 [2024-07-25 01:28:56.608341] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:34.297 [2024-07-25 01:28:56.608519] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:34.297 [2024-07-25 01:28:56.608698] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.297 [2024-07-25 01:28:56.608707] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.297 [2024-07-25 01:28:56.608714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.297 [2024-07-25 01:28:56.611551] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.297 [2024-07-25 01:28:56.620751] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.297 [2024-07-25 01:28:56.621464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.297 [2024-07-25 01:28:56.621484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:34.297 [2024-07-25 01:28:56.621493] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:34.297 [2024-07-25 01:28:56.621670] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:34.297 [2024-07-25 01:28:56.621849] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.297 [2024-07-25 01:28:56.621858] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.297 [2024-07-25 01:28:56.621866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.297 [2024-07-25 01:28:56.624700] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.297 [2024-07-25 01:28:56.633900] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.297 [2024-07-25 01:28:56.634582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.297 [2024-07-25 01:28:56.634600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:34.297 [2024-07-25 01:28:56.634607] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:34.297 [2024-07-25 01:28:56.634784] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:34.297 [2024-07-25 01:28:56.634963] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.297 [2024-07-25 01:28:56.634972] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.297 [2024-07-25 01:28:56.634979] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.297 [2024-07-25 01:28:56.638011] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.297 [2024-07-25 01:28:56.647046] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.297 [2024-07-25 01:28:56.647751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.297 [2024-07-25 01:28:56.647769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:34.297 [2024-07-25 01:28:56.647776] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:34.297 [2024-07-25 01:28:56.647958] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:34.297 [2024-07-25 01:28:56.648142] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.297 [2024-07-25 01:28:56.648151] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.297 [2024-07-25 01:28:56.648158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.297 [2024-07-25 01:28:56.650985] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.297 [2024-07-25 01:28:56.660177] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.297 [2024-07-25 01:28:56.660851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.297 [2024-07-25 01:28:56.660869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:34.297 [2024-07-25 01:28:56.660876] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:34.297 [2024-07-25 01:28:56.661059] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:34.297 [2024-07-25 01:28:56.661237] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.297 [2024-07-25 01:28:56.661246] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.297 [2024-07-25 01:28:56.661252] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.297 [2024-07-25 01:28:56.664082] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.297 [2024-07-25 01:28:56.673280] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.297 [2024-07-25 01:28:56.673951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.297 [2024-07-25 01:28:56.673968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:34.297 [2024-07-25 01:28:56.673976] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:34.297 [2024-07-25 01:28:56.674158] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:34.297 [2024-07-25 01:28:56.674336] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.297 [2024-07-25 01:28:56.674345] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.297 [2024-07-25 01:28:56.674352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.297 [2024-07-25 01:28:56.677182] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.297 [2024-07-25 01:28:56.686374] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.297 [2024-07-25 01:28:56.687115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.297 [2024-07-25 01:28:56.687133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:34.297 [2024-07-25 01:28:56.687140] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:34.297 [2024-07-25 01:28:56.687317] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:34.297 [2024-07-25 01:28:56.687494] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.297 [2024-07-25 01:28:56.687504] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.297 [2024-07-25 01:28:56.687514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.297 [2024-07-25 01:28:56.690349] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.297 [2024-07-25 01:28:56.699554] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.297 [2024-07-25 01:28:56.700175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.297 [2024-07-25 01:28:56.700195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:34.297 [2024-07-25 01:28:56.700203] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:34.297 [2024-07-25 01:28:56.700379] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:34.297 [2024-07-25 01:28:56.700558] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.297 [2024-07-25 01:28:56.700568] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.297 [2024-07-25 01:28:56.700574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.297 [2024-07-25 01:28:56.703408] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.297 [2024-07-25 01:28:56.712609] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.298 [2024-07-25 01:28:56.713222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.298 [2024-07-25 01:28:56.713239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:34.298 [2024-07-25 01:28:56.713246] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:34.298 [2024-07-25 01:28:56.713423] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:34.298 [2024-07-25 01:28:56.713601] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.298 [2024-07-25 01:28:56.713617] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.298 [2024-07-25 01:28:56.713624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.298 [2024-07-25 01:28:56.716463] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.298 [2024-07-25 01:28:56.725657] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.298 [2024-07-25 01:28:56.726357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.298 [2024-07-25 01:28:56.726375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:34.298 [2024-07-25 01:28:56.726383] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:34.298 [2024-07-25 01:28:56.726563] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:34.298 [2024-07-25 01:28:56.726741] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.298 [2024-07-25 01:28:56.726751] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.298 [2024-07-25 01:28:56.726758] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.298 [2024-07-25 01:28:56.729590] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.298 [2024-07-25 01:28:56.738777] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.298 [2024-07-25 01:28:56.739409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.298 [2024-07-25 01:28:56.739426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:34.298 [2024-07-25 01:28:56.739433] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:34.298 [2024-07-25 01:28:56.739610] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:34.298 [2024-07-25 01:28:56.739787] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.298 [2024-07-25 01:28:56.739796] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.298 [2024-07-25 01:28:56.739803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.298 [2024-07-25 01:28:56.742636] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.298 [2024-07-25 01:28:56.751836] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.298 [2024-07-25 01:28:56.752538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.298 [2024-07-25 01:28:56.752555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:34.298 [2024-07-25 01:28:56.752562] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:34.298 [2024-07-25 01:28:56.752739] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:34.298 [2024-07-25 01:28:56.752916] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.298 [2024-07-25 01:28:56.752925] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.298 [2024-07-25 01:28:56.752932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.298 [2024-07-25 01:28:56.755764] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.298 [2024-07-25 01:28:56.764961] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.298 [2024-07-25 01:28:56.765533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.298 [2024-07-25 01:28:56.765550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:34.298 [2024-07-25 01:28:56.765557] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:34.298 [2024-07-25 01:28:56.765734] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:34.298 [2024-07-25 01:28:56.765911] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.298 [2024-07-25 01:28:56.765920] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.298 [2024-07-25 01:28:56.765927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.298 [2024-07-25 01:28:56.768755] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.298 [2024-07-25 01:28:56.778114] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.298 [2024-07-25 01:28:56.778798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.298 [2024-07-25 01:28:56.778814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:34.298 [2024-07-25 01:28:56.778822] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:34.298 [2024-07-25 01:28:56.778999] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:34.298 [2024-07-25 01:28:56.779185] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.298 [2024-07-25 01:28:56.779195] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.298 [2024-07-25 01:28:56.779201] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.298 [2024-07-25 01:28:56.782031] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.559 [2024-07-25 01:28:56.791233] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.559 [2024-07-25 01:28:56.791902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.559 [2024-07-25 01:28:56.791918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:34.559 [2024-07-25 01:28:56.791926] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:34.559 [2024-07-25 01:28:56.792108] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:34.559 [2024-07-25 01:28:56.792286] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.559 [2024-07-25 01:28:56.792294] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.559 [2024-07-25 01:28:56.792301] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.559 [2024-07-25 01:28:56.795134] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.559 [2024-07-25 01:28:56.804359] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.559 [2024-07-25 01:28:56.805057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.559 [2024-07-25 01:28:56.805074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:34.559 [2024-07-25 01:28:56.805081] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:34.559 [2024-07-25 01:28:56.805258] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:34.559 [2024-07-25 01:28:56.805435] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.559 [2024-07-25 01:28:56.805443] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.559 [2024-07-25 01:28:56.805449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.559 [2024-07-25 01:28:56.808281] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.559 [2024-07-25 01:28:56.817488] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.559 [2024-07-25 01:28:56.818190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.559 [2024-07-25 01:28:56.818207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:34.559 [2024-07-25 01:28:56.818214] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:34.559 [2024-07-25 01:28:56.818391] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:34.559 [2024-07-25 01:28:56.818569] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.559 [2024-07-25 01:28:56.818577] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.559 [2024-07-25 01:28:56.818584] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.559 [2024-07-25 01:28:56.821417] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.559 [2024-07-25 01:28:56.830608] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.559 [2024-07-25 01:28:56.831305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.559 [2024-07-25 01:28:56.831322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:34.559 [2024-07-25 01:28:56.831329] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:34.559 [2024-07-25 01:28:56.831507] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:34.559 [2024-07-25 01:28:56.831684] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.559 [2024-07-25 01:28:56.831693] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.559 [2024-07-25 01:28:56.831699] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.559 [2024-07-25 01:28:56.834528] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.559 [2024-07-25 01:28:56.843721] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.559 [2024-07-25 01:28:56.844441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.559 [2024-07-25 01:28:56.844459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:34.559 [2024-07-25 01:28:56.844466] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:34.559 [2024-07-25 01:28:56.844642] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:34.559 [2024-07-25 01:28:56.844820] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.559 [2024-07-25 01:28:56.844829] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.559 [2024-07-25 01:28:56.844836] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.559 [2024-07-25 01:28:56.847673] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.559 [2024-07-25 01:28:56.856872] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.559 [2024-07-25 01:28:56.857572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.559 [2024-07-25 01:28:56.857590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:34.559 [2024-07-25 01:28:56.857598] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:34.559 [2024-07-25 01:28:56.857774] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:34.559 [2024-07-25 01:28:56.857953] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.559 [2024-07-25 01:28:56.857962] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.559 [2024-07-25 01:28:56.857969] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.559 [2024-07-25 01:28:56.860803] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.559 [2024-07-25 01:28:56.870001] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.559 [2024-07-25 01:28:56.870677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.559 [2024-07-25 01:28:56.870694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:34.559 [2024-07-25 01:28:56.870704] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:34.559 [2024-07-25 01:28:56.870881] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:34.559 [2024-07-25 01:28:56.871063] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.559 [2024-07-25 01:28:56.871072] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.559 [2024-07-25 01:28:56.871079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.559 [2024-07-25 01:28:56.873907] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.559 [2024-07-25 01:28:56.883099] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.559 [2024-07-25 01:28:56.883768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.559 [2024-07-25 01:28:56.883786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:34.559 [2024-07-25 01:28:56.883793] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:34.559 [2024-07-25 01:28:56.883970] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:34.559 [2024-07-25 01:28:56.884154] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.559 [2024-07-25 01:28:56.884164] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.559 [2024-07-25 01:28:56.884170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.559 [2024-07-25 01:28:56.886996] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.559 [2024-07-25 01:28:56.896189] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.559 [2024-07-25 01:28:56.896879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.559 [2024-07-25 01:28:56.896895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:34.560 [2024-07-25 01:28:56.896904] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:34.560 [2024-07-25 01:28:56.897130] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:34.560 [2024-07-25 01:28:56.897311] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.560 [2024-07-25 01:28:56.897321] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.560 [2024-07-25 01:28:56.897327] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.560 [2024-07-25 01:28:56.900165] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.560 [2024-07-25 01:28:56.909369] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.560 [2024-07-25 01:28:56.909994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.560 [2024-07-25 01:28:56.910010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:34.560 [2024-07-25 01:28:56.910017] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:34.560 [2024-07-25 01:28:56.910199] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:34.560 [2024-07-25 01:28:56.910380] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.560 [2024-07-25 01:28:56.910389] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.560 [2024-07-25 01:28:56.910396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.560 [2024-07-25 01:28:56.913228] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.560 [2024-07-25 01:28:56.922440] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.560 [2024-07-25 01:28:56.923123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.560 [2024-07-25 01:28:56.923141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:34.560 [2024-07-25 01:28:56.923149] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:34.560 [2024-07-25 01:28:56.923326] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:34.560 [2024-07-25 01:28:56.923504] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.560 [2024-07-25 01:28:56.923514] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.560 [2024-07-25 01:28:56.923523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.560 [2024-07-25 01:28:56.926360] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.560 [2024-07-25 01:28:56.935562] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.560 [2024-07-25 01:28:56.936255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.560 [2024-07-25 01:28:56.936273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:34.560 [2024-07-25 01:28:56.936281] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:34.560 [2024-07-25 01:28:56.936458] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:34.560 [2024-07-25 01:28:56.936636] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.560 [2024-07-25 01:28:56.936645] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.560 [2024-07-25 01:28:56.936652] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.560 [2024-07-25 01:28:56.939483] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.560 [2024-07-25 01:28:56.948679] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.560 [2024-07-25 01:28:56.949358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.560 [2024-07-25 01:28:56.949375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:34.560 [2024-07-25 01:28:56.949384] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:34.560 [2024-07-25 01:28:56.949561] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:34.560 [2024-07-25 01:28:56.949738] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.560 [2024-07-25 01:28:56.949748] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.560 [2024-07-25 01:28:56.949754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.560 [2024-07-25 01:28:56.952588] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.560 [2024-07-25 01:28:56.961820] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.560 [2024-07-25 01:28:56.962475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.560 [2024-07-25 01:28:56.962492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:34.560 [2024-07-25 01:28:56.962500] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:34.560 [2024-07-25 01:28:56.962676] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:34.560 [2024-07-25 01:28:56.962855] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.560 [2024-07-25 01:28:56.962864] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.560 [2024-07-25 01:28:56.962870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.560 [2024-07-25 01:28:56.965702] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.560 [2024-07-25 01:28:56.974896] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.560 [2024-07-25 01:28:56.975599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.560 [2024-07-25 01:28:56.975616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:34.560 [2024-07-25 01:28:56.975623] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:34.560 [2024-07-25 01:28:56.975800] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:34.560 [2024-07-25 01:28:56.975977] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.560 [2024-07-25 01:28:56.975985] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.560 [2024-07-25 01:28:56.975991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.560 [2024-07-25 01:28:56.978826] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.560 [2024-07-25 01:28:56.988029] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.560 [2024-07-25 01:28:56.988727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.560 [2024-07-25 01:28:56.988744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:34.560 [2024-07-25 01:28:56.988751] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:34.560 [2024-07-25 01:28:56.988927] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:34.560 [2024-07-25 01:28:56.989108] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.560 [2024-07-25 01:28:56.989117] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.560 [2024-07-25 01:28:56.989124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.560 [2024-07-25 01:28:56.991954] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.560 [2024-07-25 01:28:57.001158] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.560 [2024-07-25 01:28:57.001855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.560 [2024-07-25 01:28:57.001871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:34.560 [2024-07-25 01:28:57.001881] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:34.560 [2024-07-25 01:28:57.002062] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:34.560 [2024-07-25 01:28:57.002241] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.560 [2024-07-25 01:28:57.002250] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.560 [2024-07-25 01:28:57.002256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.560 [2024-07-25 01:28:57.005085] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.560 [2024-07-25 01:28:57.014306] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.560 [2024-07-25 01:28:57.014999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.560 [2024-07-25 01:28:57.015015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:34.560 [2024-07-25 01:28:57.015022] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:34.560 [2024-07-25 01:28:57.015203] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:34.560 [2024-07-25 01:28:57.015382] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.561 [2024-07-25 01:28:57.015391] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.561 [2024-07-25 01:28:57.015398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.561 [2024-07-25 01:28:57.018237] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.561 [2024-07-25 01:28:57.027431] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:34.561 [2024-07-25 01:28:57.028110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.561 [2024-07-25 01:28:57.028128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:34.561 [2024-07-25 01:28:57.028135] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:34.561 [2024-07-25 01:28:57.028312] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:34.561 [2024-07-25 01:28:57.028489] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:34.561 [2024-07-25 01:28:57.028499] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:34.561 [2024-07-25 01:28:57.028505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:34.561 [2024-07-25 01:28:57.031340] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:34.561 [2024-07-25 01:28:57.040531] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:34.561 [2024-07-25 01:28:57.041224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.561 [2024-07-25 01:28:57.041241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:34.561 [2024-07-25 01:28:57.041248] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:34.561 [2024-07-25 01:28:57.041425] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:34.561 [2024-07-25 01:28:57.041602] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:34.561 [2024-07-25 01:28:57.041614] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:34.561 [2024-07-25 01:28:57.041621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:34.561 [2024-07-25 01:28:57.044453] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:34.822 [2024-07-25 01:28:57.053654] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:34.822 [2024-07-25 01:28:57.054324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.822 [2024-07-25 01:28:57.054341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:34.822 [2024-07-25 01:28:57.054348] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:34.822 [2024-07-25 01:28:57.054525] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:34.822 [2024-07-25 01:28:57.054702] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:34.822 [2024-07-25 01:28:57.054710] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:34.822 [2024-07-25 01:28:57.054716] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:34.822 [2024-07-25 01:28:57.057548] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:34.822 [2024-07-25 01:28:57.066747] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:34.822 [2024-07-25 01:28:57.067431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.822 [2024-07-25 01:28:57.067449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:34.822 [2024-07-25 01:28:57.067456] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:34.822 [2024-07-25 01:28:57.067633] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:34.822 [2024-07-25 01:28:57.067811] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:34.822 [2024-07-25 01:28:57.067821] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:34.822 [2024-07-25 01:28:57.067827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:34.823 [2024-07-25 01:28:57.070664] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:34.823 [2024-07-25 01:28:57.079873] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:34.823 [2024-07-25 01:28:57.080576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.823 [2024-07-25 01:28:57.080593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:34.823 [2024-07-25 01:28:57.080600] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:34.823 [2024-07-25 01:28:57.080778] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:34.823 [2024-07-25 01:28:57.080955] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:34.823 [2024-07-25 01:28:57.080965] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:34.823 [2024-07-25 01:28:57.080971] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:34.823 [2024-07-25 01:28:57.083808] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:34.823 [2024-07-25 01:28:57.093006] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:34.823 [2024-07-25 01:28:57.093683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.823 [2024-07-25 01:28:57.093700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:34.823 [2024-07-25 01:28:57.093707] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:34.823 [2024-07-25 01:28:57.093884] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:34.823 [2024-07-25 01:28:57.094066] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:34.823 [2024-07-25 01:28:57.094076] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:34.823 [2024-07-25 01:28:57.094082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:34.823 [2024-07-25 01:28:57.096912] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:34.823 [2024-07-25 01:28:57.106113] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:34.823 [2024-07-25 01:28:57.106811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.823 [2024-07-25 01:28:57.106828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:34.823 [2024-07-25 01:28:57.106835] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:34.823 [2024-07-25 01:28:57.107012] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:34.823 [2024-07-25 01:28:57.107197] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:34.823 [2024-07-25 01:28:57.107208] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:34.823 [2024-07-25 01:28:57.107215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:34.823 [2024-07-25 01:28:57.110054] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:34.823 [2024-07-25 01:28:57.119273] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:34.823 [2024-07-25 01:28:57.119948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.823 [2024-07-25 01:28:57.119965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:34.823 [2024-07-25 01:28:57.119973] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:34.823 [2024-07-25 01:28:57.120156] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:34.823 [2024-07-25 01:28:57.120336] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:34.823 [2024-07-25 01:28:57.120345] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:34.823 [2024-07-25 01:28:57.120352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:34.823 [2024-07-25 01:28:57.123183] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:34.823 [2024-07-25 01:28:57.132393] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:34.823 [2024-07-25 01:28:57.133082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.823 [2024-07-25 01:28:57.133100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:34.823 [2024-07-25 01:28:57.133107] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:34.823 [2024-07-25 01:28:57.133289] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:34.823 [2024-07-25 01:28:57.133467] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:34.823 [2024-07-25 01:28:57.133477] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:34.823 [2024-07-25 01:28:57.133483] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:34.823 [2024-07-25 01:28:57.136320] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:34.823 [2024-07-25 01:28:57.145523] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:34.823 [2024-07-25 01:28:57.146221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.823 [2024-07-25 01:28:57.146239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:34.823 [2024-07-25 01:28:57.146247] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:34.823 [2024-07-25 01:28:57.146423] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:34.823 [2024-07-25 01:28:57.146600] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:34.823 [2024-07-25 01:28:57.146610] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:34.823 [2024-07-25 01:28:57.146616] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:34.823 [2024-07-25 01:28:57.149446] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:34.823 [2024-07-25 01:28:57.158673] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:34.823 [2024-07-25 01:28:57.159355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.823 [2024-07-25 01:28:57.159373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:34.823 [2024-07-25 01:28:57.159382] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:34.823 [2024-07-25 01:28:57.159560] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:34.823 [2024-07-25 01:28:57.159738] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:34.823 [2024-07-25 01:28:57.159748] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:34.823 [2024-07-25 01:28:57.159756] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:34.823 [2024-07-25 01:28:57.162592] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:34.823 [2024-07-25 01:28:57.171782] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:34.823 [2024-07-25 01:28:57.172463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.823 [2024-07-25 01:28:57.172480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:34.823 [2024-07-25 01:28:57.172488] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:34.823 [2024-07-25 01:28:57.172665] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:34.823 [2024-07-25 01:28:57.172843] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:34.823 [2024-07-25 01:28:57.172853] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:34.823 [2024-07-25 01:28:57.172864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:34.823 [2024-07-25 01:28:57.175700] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:34.823 [2024-07-25 01:28:57.184894] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:34.823 [2024-07-25 01:28:57.185489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.823 [2024-07-25 01:28:57.185506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:34.823 [2024-07-25 01:28:57.185513] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:34.823 [2024-07-25 01:28:57.185690] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:34.823 [2024-07-25 01:28:57.185869] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:34.823 [2024-07-25 01:28:57.185879] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:34.823 [2024-07-25 01:28:57.185887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:34.823 [2024-07-25 01:28:57.188723] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:34.823 [2024-07-25 01:28:57.198087] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:34.823 [2024-07-25 01:28:57.198807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.823 [2024-07-25 01:28:57.198823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:34.823 [2024-07-25 01:28:57.198831] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:34.823 [2024-07-25 01:28:57.199007] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:34.823 [2024-07-25 01:28:57.199193] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:34.823 [2024-07-25 01:28:57.199204] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:34.824 [2024-07-25 01:28:57.199210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:34.824 [2024-07-25 01:28:57.202038] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:34.824 [2024-07-25 01:28:57.211233] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:34.824 [2024-07-25 01:28:57.211922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.824 [2024-07-25 01:28:57.211939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:34.824 [2024-07-25 01:28:57.211946] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:34.824 [2024-07-25 01:28:57.212127] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:34.824 [2024-07-25 01:28:57.212304] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:34.824 [2024-07-25 01:28:57.212312] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:34.824 [2024-07-25 01:28:57.212319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:34.824 [2024-07-25 01:28:57.215155] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:34.824 [2024-07-25 01:28:57.224385] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:34.824 [2024-07-25 01:28:57.225088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.824 [2024-07-25 01:28:57.225112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:34.824 [2024-07-25 01:28:57.225119] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:34.824 [2024-07-25 01:28:57.225297] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:34.824 [2024-07-25 01:28:57.225475] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:34.824 [2024-07-25 01:28:57.225485] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:34.824 [2024-07-25 01:28:57.225491] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:34.824 [2024-07-25 01:28:57.228323] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
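Each retry iteration in this log fails inside posix_sock_create with errno = 111, which is ECONNREFUSED on Linux: nothing is accepting TCP connections at 10.0.0.2:4420 at this point, so every reconnect attempt of the bdev_nvme reset path fails immediately. A minimal Python sketch of the same failure mode, assuming only that no listener is bound to the chosen loopback port (port 1 here is a hypothetical unused port, not anything from this test):

```python
import errno
import socket

# Connecting to a local port with no listener makes the kernel answer
# with RST, which surfaces as ECONNREFUSED -- errno 111 on Linux, the
# same value the SPDK posix sock layer logs above.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    sock.connect(("127.0.0.1", 1))  # assumed: nothing listens on port 1
except OSError as exc:
    print(exc.errno, errno.ECONNREFUSED)
finally:
    sock.close()
```

On Linux `errno.ECONNREFUSED` is 111, matching the `connect() failed, errno = 111` lines; the loop only stops producing these records once the target's TCP listener exists again.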
00:28:34.824 [2024-07-25 01:28:57.237512] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:34.824 [2024-07-25 01:28:57.238211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.824 [2024-07-25 01:28:57.238228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:34.824 [2024-07-25 01:28:57.238236] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:34.824 [2024-07-25 01:28:57.238413] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:34.824 [2024-07-25 01:28:57.238591] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:34.824 [2024-07-25 01:28:57.238600] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:34.824 [2024-07-25 01:28:57.238607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:34.824 01:28:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:28:34.824 01:28:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0
00:28:34.824 01:28:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:28:34.824 01:28:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable
00:28:34.824 01:28:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
[2024-07-25 01:28:57.241440] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:34.824 [2024-07-25 01:28:57.250634] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:34.824 [2024-07-25 01:28:57.251290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.824 [2024-07-25 01:28:57.251309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:34.824 [2024-07-25 01:28:57.251317] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:34.824 [2024-07-25 01:28:57.251494] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:34.824 [2024-07-25 01:28:57.251673] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:34.824 [2024-07-25 01:28:57.251685] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:34.824 [2024-07-25 01:28:57.251694] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:34.824 [2024-07-25 01:28:57.254531] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:34.824 [2024-07-25 01:28:57.263734] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:34.824 [2024-07-25 01:28:57.264352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.824 [2024-07-25 01:28:57.264382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:34.824 [2024-07-25 01:28:57.264391] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:34.824 [2024-07-25 01:28:57.264570] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:34.824 [2024-07-25 01:28:57.264748] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:34.824 [2024-07-25 01:28:57.264758] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:34.824 [2024-07-25 01:28:57.264765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:34.824 [2024-07-25 01:28:57.267606] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:34.824 [2024-07-25 01:28:57.276814] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:34.824 01:28:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:28:34.824 01:28:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
[2024-07-25 01:28:57.277515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-07-25 01:28:57.277534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
[2024-07-25 01:28:57.277541] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:34.824 01:28:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
[2024-07-25 01:28:57.277718] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
[2024-07-25 01:28:57.277897] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
[2024-07-25 01:28:57.277907] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
[2024-07-25 01:28:57.277916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:34.824 01:28:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
[2024-07-25 01:28:57.280753] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:34.824 [2024-07-25 01:28:57.282974] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:28:34.824 [2024-07-25 01:28:57.289953] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:34.824 [2024-07-25 01:28:57.290581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.824 [2024-07-25 01:28:57.290597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:34.824 [2024-07-25 01:28:57.290604] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:34.824 [2024-07-25 01:28:57.290781] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:34.824 [2024-07-25 01:28:57.290958] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:34.824 [2024-07-25 01:28:57.290967] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:34.824 [2024-07-25 01:28:57.290974] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:34.824 [2024-07-25 01:28:57.293809] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:34.824 [2024-07-25 01:28:57.303008] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:34.824 [2024-07-25 01:28:57.303726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.824 [2024-07-25 01:28:57.303742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:34.824 [2024-07-25 01:28:57.303749] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:34.824 [2024-07-25 01:28:57.303926] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:34.824 [2024-07-25 01:28:57.304108] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:34.824 [2024-07-25 01:28:57.304117] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:34.824 [2024-07-25 01:28:57.304124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:34.824 [2024-07-25 01:28:57.306955] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:34.824 01:28:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
01:28:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
01:28:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
01:28:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:35.084 [2024-07-25 01:28:57.316176] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:35.084 [2024-07-25 01:28:57.316873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.085 [2024-07-25 01:28:57.316890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420
00:28:35.085 [2024-07-25 01:28:57.316897] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set
00:28:35.085 [2024-07-25 01:28:57.317080] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor
00:28:35.085 [2024-07-25 01:28:57.317260] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:35.085 [2024-07-25 01:28:57.317269] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:35.085 [2024-07-25 01:28:57.317276] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:35.085 [2024-07-25 01:28:57.320110] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:35.085 Malloc0 00:28:35.085 [2024-07-25 01:28:57.329319] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:35.085 01:28:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.085 [2024-07-25 01:28:57.330020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.085 [2024-07-25 01:28:57.330038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:35.085 [2024-07-25 01:28:57.330052] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:35.085 01:28:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:35.085 [2024-07-25 01:28:57.330230] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:35.085 [2024-07-25 01:28:57.330409] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:35.085 [2024-07-25 01:28:57.330418] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:35.085 [2024-07-25 01:28:57.330424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:35.085 01:28:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.085 01:28:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:35.085 [2024-07-25 01:28:57.333255] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:35.085 01:28:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.085 01:28:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:35.085 01:28:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.085 01:28:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:35.085 [2024-07-25 01:28:57.342458] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:35.085 [2024-07-25 01:28:57.343094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.085 [2024-07-25 01:28:57.343112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d37980 with addr=10.0.0.2, port=4420 00:28:35.085 [2024-07-25 01:28:57.343119] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37980 is same with the state(5) to be set 00:28:35.085 [2024-07-25 01:28:57.343296] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37980 (9): Bad file descriptor 00:28:35.085 [2024-07-25 01:28:57.343474] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:35.085 [2024-07-25 01:28:57.343484] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:35.085 [2024-07-25 01:28:57.343490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:35.085 [2024-07-25 01:28:57.346327] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:35.085 01:28:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.085 01:28:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:35.085 01:28:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.085 01:28:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:35.085 [2024-07-25 01:28:57.352925] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:35.085 [2024-07-25 01:28:57.355530] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:35.085 01:28:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.085 01:28:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1052247 00:28:35.085 [2024-07-25 01:28:57.394018] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:28:45.079 00:28:45.079 Latency(us) 00:28:45.079 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:45.079 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:45.079 Verification LBA range: start 0x0 length 0x4000 00:28:45.079 Nvme1n1 : 15.01 8141.23 31.80 12301.20 0.00 6241.48 1752.38 27354.16 00:28:45.079 =================================================================================================================== 00:28:45.079 Total : 8141.23 31.80 12301.20 0.00 6241.48 1752.38 27354.16 00:28:45.079 01:29:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:28:45.079 01:29:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:45.079 01:29:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.079 01:29:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:45.079 01:29:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.080 01:29:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:28:45.080 01:29:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:28:45.080 01:29:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:45.080 01:29:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:28:45.080 01:29:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:45.080 01:29:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:28:45.080 01:29:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:45.080 01:29:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:45.080 rmmod nvme_tcp 00:28:45.080 rmmod nvme_fabrics 00:28:45.080 rmmod nvme_keyring 00:28:45.080 01:29:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:45.080 01:29:06 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@124 -- # set -e 00:28:45.080 01:29:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:28:45.080 01:29:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 1053333 ']' 00:28:45.080 01:29:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 1053333 00:28:45.080 01:29:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 1053333 ']' 00:28:45.080 01:29:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 1053333 00:28:45.080 01:29:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:28:45.080 01:29:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:45.080 01:29:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1053333 00:28:45.080 01:29:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:45.080 01:29:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:45.080 01:29:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1053333' 00:28:45.080 killing process with pid 1053333 00:28:45.080 01:29:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 1053333 00:28:45.080 01:29:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 1053333 00:28:45.080 01:29:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:45.080 01:29:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:45.080 01:29:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:45.080 01:29:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:45.080 01:29:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:45.080 01:29:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:45.080 01:29:06 
nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:45.080 01:29:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:46.021 01:29:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:46.021 00:28:46.021 real 0m26.195s 00:28:46.021 user 1m2.816s 00:28:46.021 sys 0m6.233s 00:28:46.021 01:29:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:46.021 01:29:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:46.021 ************************************ 00:28:46.021 END TEST nvmf_bdevperf 00:28:46.021 ************************************ 00:28:46.022 01:29:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:46.022 01:29:08 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:28:46.022 01:29:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:46.022 01:29:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:46.022 01:29:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:46.022 ************************************ 00:28:46.022 START TEST nvmf_target_disconnect 00:28:46.022 ************************************ 00:28:46.022 01:29:08 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:28:46.282 * Looking for test storage... 
00:28:46.282 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:46.282 01:29:08 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:46.282 01:29:08 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:28:46.282 01:29:08 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:46.282 01:29:08 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:46.282 01:29:08 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:46.282 01:29:08 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:46.282 01:29:08 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:46.282 01:29:08 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:46.282 01:29:08 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:46.282 01:29:08 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:46.282 01:29:08 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:46.282 01:29:08 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:46.282 01:29:08 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:46.282 01:29:08 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:46.282 01:29:08 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:46.282 01:29:08 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:46.282 01:29:08 nvmf_tcp.nvmf_target_disconnect -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:46.282 01:29:08 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:46.282 01:29:08 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:46.282 01:29:08 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:46.282 01:29:08 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:46.282 01:29:08 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:46.282 01:29:08 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.282 01:29:08 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.282 01:29:08 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.282 01:29:08 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:28:46.282 01:29:08 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.282 01:29:08 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:28:46.282 01:29:08 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:46.283 01:29:08 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:46.283 01:29:08 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:46.283 01:29:08 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:46.283 01:29:08 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:46.283 01:29:08 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:46.283 01:29:08 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:46.283 01:29:08 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:46.283 01:29:08 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:28:46.283 01:29:08 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:28:46.283 01:29:08 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:28:46.283 01:29:08 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:28:46.283 01:29:08 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:46.283 01:29:08 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:46.283 01:29:08 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:46.283 01:29:08 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:46.283 01:29:08 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:46.283 01:29:08 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:46.283 01:29:08 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:46.283 01:29:08 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:46.283 01:29:08 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:46.283 01:29:08 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:46.283 01:29:08 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:28:46.283 01:29:08 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:51.564 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:51.564 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:51.564 01:29:13 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:51.564 Found net devices under 0000:86:00.0: cvl_0_0 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:51.564 Found net devices under 0000:86:00.1: cvl_0_1 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:51.564 01:29:13 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:51.564 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:51.564 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:28:51.564 00:28:51.564 --- 10.0.0.2 ping statistics --- 00:28:51.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:51.564 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:51.564 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:51.564 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:28:51.564 00:28:51.564 --- 10.0.0.1 ping statistics --- 00:28:51.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:51.564 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:51.564 01:29:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:51.565 01:29:13 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:51.565 01:29:13 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:28:51.565 01:29:13 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:51.565 01:29:13 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:51.565 01:29:13 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:51.565 ************************************ 00:28:51.565 START TEST nvmf_target_disconnect_tc1 00:28:51.565 ************************************ 00:28:51.565 01:29:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:28:51.565 01:29:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:51.565 01:29:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:28:51.565 01:29:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:51.565 01:29:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:51.565 01:29:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:51.565 01:29:13 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:51.565 01:29:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:51.565 01:29:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:51.565 01:29:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:51.565 01:29:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:51.565 01:29:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:28:51.565 01:29:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:51.565 EAL: No free 2048 kB hugepages reported on node 1 00:28:51.826 [2024-07-25 01:29:14.058386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.826 [2024-07-25 01:29:14.058511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda9e60 with addr=10.0.0.2, port=4420 00:28:51.826 [2024-07-25 01:29:14.058565] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:51.826 [2024-07-25 01:29:14.058596] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:51.826 [2024-07-25 01:29:14.058628] nvme.c: 
913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:28:51.826 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:28:51.826 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:28:51.826 Initializing NVMe Controllers 00:28:51.826 01:29:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:28:51.826 01:29:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:51.826 01:29:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:51.826 01:29:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:51.826 00:28:51.826 real 0m0.098s 00:28:51.826 user 0m0.041s 00:28:51.826 sys 0m0.056s 00:28:51.826 01:29:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:51.826 01:29:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:51.826 ************************************ 00:28:51.826 END TEST nvmf_target_disconnect_tc1 00:28:51.826 ************************************ 00:28:51.826 01:29:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:28:51.826 01:29:14 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:28:51.826 01:29:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:51.826 01:29:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:51.826 01:29:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:51.826 ************************************ 00:28:51.826 START TEST nvmf_target_disconnect_tc2 00:28:51.826 
************************************ 00:28:51.826 01:29:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:28:51.826 01:29:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:28:51.826 01:29:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:28:51.826 01:29:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:51.826 01:29:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:51.826 01:29:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:51.826 01:29:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1058327 00:28:51.826 01:29:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1058327 00:28:51.826 01:29:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:28:51.826 01:29:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1058327 ']' 00:28:51.826 01:29:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:51.826 01:29:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:51.826 01:29:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:28:51.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:51.826 01:29:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:51.826 01:29:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:51.826 [2024-07-25 01:29:14.194655] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:28:51.826 [2024-07-25 01:29:14.194693] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:51.826 EAL: No free 2048 kB hugepages reported on node 1 00:28:51.826 [2024-07-25 01:29:14.263981] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:52.086 [2024-07-25 01:29:14.337071] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:52.086 [2024-07-25 01:29:14.337111] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:52.086 [2024-07-25 01:29:14.337118] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:52.086 [2024-07-25 01:29:14.337125] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:52.086 [2024-07-25 01:29:14.337129] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:52.086 [2024-07-25 01:29:14.337245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:28:52.086 [2024-07-25 01:29:14.337354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:28:52.086 [2024-07-25 01:29:14.337465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:28:52.086 [2024-07-25 01:29:14.337464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:28:52.656 01:29:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:52.656 01:29:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:28:52.656 01:29:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:52.656 01:29:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:52.656 01:29:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:52.656 01:29:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:52.656 01:29:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:52.656 01:29:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:52.656 01:29:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:52.656 Malloc0 00:28:52.656 01:29:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.656 01:29:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 
00:28:52.656 01:29:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:52.656 01:29:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:52.656 [2024-07-25 01:29:15.064828] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:52.656 01:29:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.656 01:29:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:52.656 01:29:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:52.656 01:29:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:52.656 01:29:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.656 01:29:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:52.656 01:29:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:52.656 01:29:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:52.656 01:29:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.656 01:29:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:52.656 01:29:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:28:52.656 01:29:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:52.656 [2024-07-25 01:29:15.089840] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:52.656 01:29:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.656 01:29:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:52.656 01:29:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:52.656 01:29:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:52.657 01:29:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.657 01:29:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1058578 00:28:52.657 01:29:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:28:52.657 01:29:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:52.916 EAL: No free 2048 kB hugepages reported on node 1 00:28:54.831 01:29:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1058327 00:28:54.831 01:29:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:28:54.831 Read completed with error (sct=0, sc=8) 00:28:54.831 starting I/O failed 00:28:54.831 Read completed with error 
(sct=0, sc=8) 00:28:54.831 starting I/O failed 00:28:54.831 Read completed with error (sct=0, sc=8) 00:28:54.831 starting I/O failed 00:28:54.831 Read completed with error (sct=0, sc=8) 00:28:54.831 starting I/O failed 00:28:54.831 Read completed with error (sct=0, sc=8) 00:28:54.831 starting I/O failed 00:28:54.831 Read completed with error (sct=0, sc=8) 00:28:54.831 starting I/O failed 00:28:54.831 Read completed with error (sct=0, sc=8) 00:28:54.831 starting I/O failed 00:28:54.831 Read completed with error (sct=0, sc=8) 00:28:54.831 starting I/O failed 00:28:54.831 Read completed with error (sct=0, sc=8) 00:28:54.831 starting I/O failed 00:28:54.831 Read completed with error (sct=0, sc=8) 00:28:54.831 starting I/O failed 00:28:54.831 Write completed with error (sct=0, sc=8) 00:28:54.831 starting I/O failed 00:28:54.831 Write completed with error (sct=0, sc=8) 00:28:54.831 starting I/O failed 00:28:54.831 Write completed with error (sct=0, sc=8) 00:28:54.831 starting I/O failed 00:28:54.831 Write completed with error (sct=0, sc=8) 00:28:54.831 starting I/O failed 00:28:54.831 Read completed with error (sct=0, sc=8) 00:28:54.831 starting I/O failed 00:28:54.831 Read completed with error (sct=0, sc=8) 00:28:54.831 starting I/O failed 00:28:54.831 Write completed with error (sct=0, sc=8) 00:28:54.831 starting I/O failed 00:28:54.831 Read completed with error (sct=0, sc=8) 00:28:54.831 starting I/O failed 00:28:54.831 Write completed with error (sct=0, sc=8) 00:28:54.831 starting I/O failed 00:28:54.831 Read completed with error (sct=0, sc=8) 00:28:54.831 starting I/O failed 00:28:54.831 Write completed with error (sct=0, sc=8) 00:28:54.831 starting I/O failed 00:28:54.831 Read completed with error (sct=0, sc=8) 00:28:54.831 starting I/O failed 00:28:54.831 Write completed with error (sct=0, sc=8) 00:28:54.831 starting I/O failed 00:28:54.831 Read completed with error (sct=0, sc=8) 00:28:54.831 starting I/O failed 00:28:54.831 Write completed with error (sct=0, 
sc=8) 00:28:54.831 starting I/O failed 00:28:54.831 Read completed with error (sct=0, sc=8) 00:28:54.831 starting I/O failed 00:28:54.831 Read completed with error (sct=0, sc=8) 00:28:54.831 starting I/O failed 00:28:54.831 Write completed with error (sct=0, sc=8) 00:28:54.831 starting I/O failed 00:28:54.831 Write completed with error (sct=0, sc=8) 00:28:54.831 starting I/O failed 00:28:54.831 Write completed with error (sct=0, sc=8) 00:28:54.831 starting I/O failed 00:28:54.831 Read completed with error (sct=0, sc=8) 00:28:54.831 starting I/O failed 00:28:54.831 Read completed with error (sct=0, sc=8) 00:28:54.831 starting I/O failed 00:28:54.831 Read completed with error (sct=0, sc=8) 00:28:54.831 starting I/O failed 00:28:54.831 Read completed with error (sct=0, sc=8) 00:28:54.831 starting I/O failed 00:28:54.831 Read completed with error (sct=0, sc=8) 00:28:54.831 starting I/O failed 00:28:54.831 Read completed with error (sct=0, sc=8) 00:28:54.831 starting I/O failed 00:28:54.831 [2024-07-25 01:29:17.117392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:54.831 Read completed with error (sct=0, sc=8) 00:28:54.831 starting I/O failed 00:28:54.831 Read completed with error (sct=0, sc=8) 00:28:54.831 starting I/O failed 00:28:54.831 Read completed with error (sct=0, sc=8) 00:28:54.831 starting I/O failed 00:28:54.831 Read completed with error (sct=0, sc=8) 00:28:54.831 starting I/O failed 00:28:54.831 Read completed with error (sct=0, sc=8) 00:28:54.831 starting I/O failed 00:28:54.831 Read completed with error (sct=0, sc=8) 00:28:54.831 starting I/O failed 00:28:54.831 Read completed with error (sct=0, sc=8) 00:28:54.831 starting I/O failed 00:28:54.831 Write completed with error (sct=0, sc=8) 00:28:54.831 starting I/O failed 00:28:54.831 Write completed with error (sct=0, sc=8) 00:28:54.831 starting I/O failed 00:28:54.831 Read completed with error (sct=0, sc=8) 00:28:54.831 
starting I/O failed 00:28:54.831 Read completed with error (sct=0, sc=8) 00:28:54.831 starting I/O failed 00:28:54.831 Read completed with error (sct=0, sc=8) 00:28:54.831 starting I/O failed 00:28:54.831 Read completed with error (sct=0, sc=8) 00:28:54.831 starting I/O failed 00:28:54.831 Read completed with error (sct=0, sc=8) 00:28:54.831 starting I/O failed 00:28:54.831 Write completed with error (sct=0, sc=8) 00:28:54.831 starting I/O failed 00:28:54.831 Write completed with error (sct=0, sc=8) 00:28:54.831 starting I/O failed 00:28:54.831 Write completed with error (sct=0, sc=8) 00:28:54.831 starting I/O failed 00:28:54.831 Read completed with error (sct=0, sc=8) 00:28:54.831 starting I/O failed 00:28:54.831 Read completed with error (sct=0, sc=8) 00:28:54.831 starting I/O failed 00:28:54.831 Read completed with error (sct=0, sc=8) 00:28:54.831 starting I/O failed 00:28:54.831 Read completed with error (sct=0, sc=8) 00:28:54.831 starting I/O failed 00:28:54.831 Read completed with error (sct=0, sc=8) 00:28:54.831 starting I/O failed 00:28:54.831 Read completed with error (sct=0, sc=8) 00:28:54.831 starting I/O failed 00:28:54.831 Write completed with error (sct=0, sc=8) 00:28:54.831 starting I/O failed 00:28:54.831 Read completed with error (sct=0, sc=8) 00:28:54.831 starting I/O failed 00:28:54.832 Read completed with error (sct=0, sc=8) 00:28:54.832 starting I/O failed 00:28:54.832 Write completed with error (sct=0, sc=8) 00:28:54.832 starting I/O failed 00:28:54.832 Read completed with error (sct=0, sc=8) 00:28:54.832 starting I/O failed 00:28:54.832 [2024-07-25 01:29:17.117596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:54.832 Read completed with error (sct=0, sc=8) 00:28:54.832 starting I/O failed 00:28:54.832 Read completed with error (sct=0, sc=8) 00:28:54.832 starting I/O failed 00:28:54.832 Read completed with error (sct=0, sc=8) 00:28:54.832 starting I/O 
failed 00:28:54.832 Read completed with error (sct=0, sc=8) 00:28:54.832 starting I/O failed 00:28:54.832 Read completed with error (sct=0, sc=8) 00:28:54.832 starting I/O failed 00:28:54.832 Read completed with error (sct=0, sc=8) 00:28:54.832 starting I/O failed 00:28:54.832 Read completed with error (sct=0, sc=8) 00:28:54.832 starting I/O failed 00:28:54.832 Write completed with error (sct=0, sc=8) 00:28:54.832 starting I/O failed 00:28:54.832 Read completed with error (sct=0, sc=8) 00:28:54.832 starting I/O failed 00:28:54.832 Read completed with error (sct=0, sc=8) 00:28:54.832 starting I/O failed 00:28:54.832 Read completed with error (sct=0, sc=8) 00:28:54.832 starting I/O failed 00:28:54.832 Write completed with error (sct=0, sc=8) 00:28:54.832 starting I/O failed 00:28:54.832 Write completed with error (sct=0, sc=8) 00:28:54.832 starting I/O failed 00:28:54.832 Write completed with error (sct=0, sc=8) 00:28:54.832 starting I/O failed 00:28:54.832 Read completed with error (sct=0, sc=8) 00:28:54.832 starting I/O failed 00:28:54.832 Write completed with error (sct=0, sc=8) 00:28:54.832 starting I/O failed 00:28:54.832 Write completed with error (sct=0, sc=8) 00:28:54.832 starting I/O failed 00:28:54.832 Write completed with error (sct=0, sc=8) 00:28:54.832 starting I/O failed 00:28:54.832 Write completed with error (sct=0, sc=8) 00:28:54.832 starting I/O failed 00:28:54.832 Read completed with error (sct=0, sc=8) 00:28:54.832 starting I/O failed 00:28:54.832 Read completed with error (sct=0, sc=8) 00:28:54.832 starting I/O failed 00:28:54.832 Read completed with error (sct=0, sc=8) 00:28:54.832 starting I/O failed 00:28:54.832 Write completed with error (sct=0, sc=8) 00:28:54.832 starting I/O failed 00:28:54.832 Write completed with error (sct=0, sc=8) 00:28:54.832 starting I/O failed 00:28:54.832 Read completed with error (sct=0, sc=8) 00:28:54.832 starting I/O failed 00:28:54.832 Write completed with error (sct=0, sc=8) 00:28:54.832 starting I/O failed 
00:28:54.832 Read completed with error (sct=0, sc=8) 00:28:54.832 starting I/O failed 00:28:54.832 Write completed with error (sct=0, sc=8) 00:28:54.832 starting I/O failed 00:28:54.832 Write completed with error (sct=0, sc=8) 00:28:54.832 starting I/O failed 00:28:54.832 Read completed with error (sct=0, sc=8) 00:28:54.832 starting I/O failed 00:28:54.832 Read completed with error (sct=0, sc=8) 00:28:54.832 starting I/O failed 00:28:54.832 Write completed with error (sct=0, sc=8) 00:28:54.832 starting I/O failed 00:28:54.832 [2024-07-25 01:29:17.117811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.832 [2024-07-25 01:29:17.118293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.832 [2024-07-25 01:29:17.118312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.832 qpair failed and we were unable to recover it. 00:28:54.832 [2024-07-25 01:29:17.118732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.832 [2024-07-25 01:29:17.118743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.832 qpair failed and we were unable to recover it. 00:28:54.832 [2024-07-25 01:29:17.119215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.832 [2024-07-25 01:29:17.119248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.832 qpair failed and we were unable to recover it. 
00:28:54.832 [2024-07-25 01:29:17.119631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.832 [2024-07-25 01:29:17.119662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.832 qpair failed and we were unable to recover it. 00:28:54.832 [2024-07-25 01:29:17.120221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.832 [2024-07-25 01:29:17.120252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.832 qpair failed and we were unable to recover it. 00:28:54.832 [2024-07-25 01:29:17.120730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.832 [2024-07-25 01:29:17.120760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.832 qpair failed and we were unable to recover it. 00:28:54.832 [2024-07-25 01:29:17.121241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.832 [2024-07-25 01:29:17.121286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.832 qpair failed and we were unable to recover it. 00:28:54.832 [2024-07-25 01:29:17.121805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.832 [2024-07-25 01:29:17.121836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.832 qpair failed and we were unable to recover it. 
00:28:54.832 [2024-07-25 01:29:17.122433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.832 [2024-07-25 01:29:17.122466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.832 qpair failed and we were unable to recover it. 00:28:54.832 [2024-07-25 01:29:17.123066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.832 [2024-07-25 01:29:17.123098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.832 qpair failed and we were unable to recover it. 00:28:54.832 [2024-07-25 01:29:17.123648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.832 [2024-07-25 01:29:17.123678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.832 qpair failed and we were unable to recover it. 00:28:54.832 [2024-07-25 01:29:17.124234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.832 [2024-07-25 01:29:17.124275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.832 qpair failed and we were unable to recover it. 00:28:54.832 [2024-07-25 01:29:17.124694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.832 [2024-07-25 01:29:17.124725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.832 qpair failed and we were unable to recover it. 
00:28:54.832 [2024-07-25 01:29:17.125256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.832 [2024-07-25 01:29:17.125287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.832 qpair failed and we were unable to recover it. 00:28:54.832 [2024-07-25 01:29:17.125807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.832 [2024-07-25 01:29:17.125837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.832 qpair failed and we were unable to recover it. 00:28:54.832 [2024-07-25 01:29:17.126258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.832 [2024-07-25 01:29:17.126290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.832 qpair failed and we were unable to recover it. 00:28:54.832 [2024-07-25 01:29:17.126814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.832 [2024-07-25 01:29:17.126844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.832 qpair failed and we were unable to recover it. 00:28:54.832 [2024-07-25 01:29:17.127311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.832 [2024-07-25 01:29:17.127343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.832 qpair failed and we were unable to recover it. 
00:28:54.832 [2024-07-25 01:29:17.127795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.832 [2024-07-25 01:29:17.127826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.832 qpair failed and we were unable to recover it. 00:28:54.832 [2024-07-25 01:29:17.128380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.832 [2024-07-25 01:29:17.128435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.832 qpair failed and we were unable to recover it. 00:28:54.832 [2024-07-25 01:29:17.128950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.832 [2024-07-25 01:29:17.128981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.832 qpair failed and we were unable to recover it. 00:28:54.832 [2024-07-25 01:29:17.129472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.832 [2024-07-25 01:29:17.129503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.832 qpair failed and we were unable to recover it. 00:28:54.832 [2024-07-25 01:29:17.130054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.832 [2024-07-25 01:29:17.130085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.832 qpair failed and we were unable to recover it. 
00:28:54.832 [2024-07-25 01:29:17.130613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.832 [2024-07-25 01:29:17.130644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.832 qpair failed and we were unable to recover it. 00:28:54.832 [2024-07-25 01:29:17.131152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.833 [2024-07-25 01:29:17.131184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.833 qpair failed and we were unable to recover it. 00:28:54.833 [2024-07-25 01:29:17.131647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.833 [2024-07-25 01:29:17.131678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.833 qpair failed and we were unable to recover it. 00:28:54.833 [2024-07-25 01:29:17.132201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.833 [2024-07-25 01:29:17.132232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.833 qpair failed and we were unable to recover it. 00:28:54.833 [2024-07-25 01:29:17.132716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.833 [2024-07-25 01:29:17.132732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.833 qpair failed and we were unable to recover it. 
00:28:54.833 [2024-07-25 01:29:17.133234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.833 [2024-07-25 01:29:17.133250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.833 qpair failed and we were unable to recover it. 00:28:54.833 [2024-07-25 01:29:17.133685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.833 [2024-07-25 01:29:17.133700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.833 qpair failed and we were unable to recover it. 00:28:54.833 [2024-07-25 01:29:17.134190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.833 [2024-07-25 01:29:17.134205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.833 qpair failed and we were unable to recover it. 00:28:54.833 [2024-07-25 01:29:17.134696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.833 [2024-07-25 01:29:17.134712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.833 qpair failed and we were unable to recover it. 00:28:54.833 [2024-07-25 01:29:17.135177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.833 [2024-07-25 01:29:17.135192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.833 qpair failed and we were unable to recover it. 
00:28:54.836 [2024-07-25 01:29:17.194328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.836 [2024-07-25 01:29:17.194360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.836 qpair failed and we were unable to recover it. 00:28:54.836 [2024-07-25 01:29:17.194819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.836 [2024-07-25 01:29:17.194850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.836 qpair failed and we were unable to recover it. 00:28:54.836 [2024-07-25 01:29:17.195380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.836 [2024-07-25 01:29:17.195412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.836 qpair failed and we were unable to recover it. 00:28:54.836 [2024-07-25 01:29:17.195915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.836 [2024-07-25 01:29:17.195947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.836 qpair failed and we were unable to recover it. 00:28:54.836 [2024-07-25 01:29:17.196448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.836 [2024-07-25 01:29:17.196480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.836 qpair failed and we were unable to recover it. 
00:28:54.836 [2024-07-25 01:29:17.196998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.836 [2024-07-25 01:29:17.197029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.836 qpair failed and we were unable to recover it. 00:28:54.836 [2024-07-25 01:29:17.197592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.836 [2024-07-25 01:29:17.197624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.836 qpair failed and we were unable to recover it. 00:28:54.836 [2024-07-25 01:29:17.198156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.836 [2024-07-25 01:29:17.198189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.836 qpair failed and we were unable to recover it. 00:28:54.836 [2024-07-25 01:29:17.198709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.836 [2024-07-25 01:29:17.198739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.836 qpair failed and we were unable to recover it. 00:28:54.836 [2024-07-25 01:29:17.199291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.836 [2024-07-25 01:29:17.199323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.836 qpair failed and we were unable to recover it. 
00:28:54.836 [2024-07-25 01:29:17.199861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.836 [2024-07-25 01:29:17.199892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.836 qpair failed and we were unable to recover it. 00:28:54.836 [2024-07-25 01:29:17.200441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.836 [2024-07-25 01:29:17.200473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.836 qpair failed and we were unable to recover it. 00:28:54.836 [2024-07-25 01:29:17.200939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.836 [2024-07-25 01:29:17.200969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.836 qpair failed and we were unable to recover it. 00:28:54.836 [2024-07-25 01:29:17.201410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.836 [2024-07-25 01:29:17.201443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.836 qpair failed and we were unable to recover it. 00:28:54.836 [2024-07-25 01:29:17.201894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.836 [2024-07-25 01:29:17.201926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.836 qpair failed and we were unable to recover it. 
00:28:54.836 [2024-07-25 01:29:17.202371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.836 [2024-07-25 01:29:17.202421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.836 qpair failed and we were unable to recover it. 00:28:54.836 [2024-07-25 01:29:17.202879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.836 [2024-07-25 01:29:17.202910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.836 qpair failed and we were unable to recover it. 00:28:54.836 [2024-07-25 01:29:17.203399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.836 [2024-07-25 01:29:17.203432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.836 qpair failed and we were unable to recover it. 00:28:54.836 [2024-07-25 01:29:17.203982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.836 [2024-07-25 01:29:17.204013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.836 qpair failed and we were unable to recover it. 00:28:54.836 [2024-07-25 01:29:17.204437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.836 [2024-07-25 01:29:17.204469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.836 qpair failed and we were unable to recover it. 
00:28:54.836 [2024-07-25 01:29:17.204997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.836 [2024-07-25 01:29:17.205028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.836 qpair failed and we were unable to recover it. 00:28:54.836 [2024-07-25 01:29:17.205533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.836 [2024-07-25 01:29:17.205565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.836 qpair failed and we were unable to recover it. 00:28:54.836 [2024-07-25 01:29:17.206055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.836 [2024-07-25 01:29:17.206087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.836 qpair failed and we were unable to recover it. 00:28:54.837 [2024-07-25 01:29:17.206552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.837 [2024-07-25 01:29:17.206585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.837 qpair failed and we were unable to recover it. 00:28:54.837 [2024-07-25 01:29:17.207036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.837 [2024-07-25 01:29:17.207078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.837 qpair failed and we were unable to recover it. 
00:28:54.837 [2024-07-25 01:29:17.207482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.837 [2024-07-25 01:29:17.207512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.837 qpair failed and we were unable to recover it. 00:28:54.837 [2024-07-25 01:29:17.208038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.837 [2024-07-25 01:29:17.208082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.837 qpair failed and we were unable to recover it. 00:28:54.837 [2024-07-25 01:29:17.208556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.837 [2024-07-25 01:29:17.208587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.837 qpair failed and we were unable to recover it. 00:28:54.837 [2024-07-25 01:29:17.209023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.837 [2024-07-25 01:29:17.209064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.837 qpair failed and we were unable to recover it. 00:28:54.837 [2024-07-25 01:29:17.209621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.837 [2024-07-25 01:29:17.209652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.837 qpair failed and we were unable to recover it. 
00:28:54.837 [2024-07-25 01:29:17.210218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.837 [2024-07-25 01:29:17.210251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.837 qpair failed and we were unable to recover it. 00:28:54.837 [2024-07-25 01:29:17.210713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.837 [2024-07-25 01:29:17.210744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.837 qpair failed and we were unable to recover it. 00:28:54.837 [2024-07-25 01:29:17.211268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.837 [2024-07-25 01:29:17.211311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.837 qpair failed and we were unable to recover it. 00:28:54.837 [2024-07-25 01:29:17.211788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.837 [2024-07-25 01:29:17.211803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.837 qpair failed and we were unable to recover it. 00:28:54.837 [2024-07-25 01:29:17.212239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.837 [2024-07-25 01:29:17.212272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.837 qpair failed and we were unable to recover it. 
00:28:54.837 [2024-07-25 01:29:17.212792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.837 [2024-07-25 01:29:17.212822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.837 qpair failed and we were unable to recover it. 00:28:54.837 [2024-07-25 01:29:17.213356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.837 [2024-07-25 01:29:17.213388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.837 qpair failed and we were unable to recover it. 00:28:54.837 [2024-07-25 01:29:17.213904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.837 [2024-07-25 01:29:17.213935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.837 qpair failed and we were unable to recover it. 00:28:54.837 [2024-07-25 01:29:17.214410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.837 [2024-07-25 01:29:17.214443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.837 qpair failed and we were unable to recover it. 00:28:54.837 [2024-07-25 01:29:17.214904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.837 [2024-07-25 01:29:17.214935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.837 qpair failed and we were unable to recover it. 
00:28:54.837 [2024-07-25 01:29:17.215406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.837 [2024-07-25 01:29:17.215438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.837 qpair failed and we were unable to recover it. 00:28:54.837 [2024-07-25 01:29:17.215919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.837 [2024-07-25 01:29:17.215949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.837 qpair failed and we were unable to recover it. 00:28:54.837 [2024-07-25 01:29:17.216424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.837 [2024-07-25 01:29:17.216457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.837 qpair failed and we were unable to recover it. 00:28:54.837 [2024-07-25 01:29:17.217007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.837 [2024-07-25 01:29:17.217037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.837 qpair failed and we were unable to recover it. 00:28:54.837 [2024-07-25 01:29:17.217572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.837 [2024-07-25 01:29:17.217603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.837 qpair failed and we were unable to recover it. 
00:28:54.837 [2024-07-25 01:29:17.217999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.837 [2024-07-25 01:29:17.218029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.837 qpair failed and we were unable to recover it. 00:28:54.837 [2024-07-25 01:29:17.218489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.837 [2024-07-25 01:29:17.218521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.837 qpair failed and we were unable to recover it. 00:28:54.837 [2024-07-25 01:29:17.219078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.837 [2024-07-25 01:29:17.219110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.837 qpair failed and we were unable to recover it. 00:28:54.837 [2024-07-25 01:29:17.219638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.837 [2024-07-25 01:29:17.219669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.837 qpair failed and we were unable to recover it. 00:28:54.837 [2024-07-25 01:29:17.220189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.837 [2024-07-25 01:29:17.220221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.837 qpair failed and we were unable to recover it. 
00:28:54.837 [2024-07-25 01:29:17.220742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.837 [2024-07-25 01:29:17.220773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.837 qpair failed and we were unable to recover it. 00:28:54.837 [2024-07-25 01:29:17.221318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.837 [2024-07-25 01:29:17.221350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.837 qpair failed and we were unable to recover it. 00:28:54.837 [2024-07-25 01:29:17.221803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.837 [2024-07-25 01:29:17.221834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.837 qpair failed and we were unable to recover it. 00:28:54.837 [2024-07-25 01:29:17.222351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.837 [2024-07-25 01:29:17.222382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.837 qpair failed and we were unable to recover it. 00:28:54.837 [2024-07-25 01:29:17.222928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.837 [2024-07-25 01:29:17.222958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.837 qpair failed and we were unable to recover it. 
00:28:54.837 [2024-07-25 01:29:17.223495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.837 [2024-07-25 01:29:17.223532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.837 qpair failed and we were unable to recover it. 00:28:54.837 [2024-07-25 01:29:17.224134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.837 [2024-07-25 01:29:17.224167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.837 qpair failed and we were unable to recover it. 00:28:54.837 [2024-07-25 01:29:17.224743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.837 [2024-07-25 01:29:17.224774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.837 qpair failed and we were unable to recover it. 00:28:54.837 [2024-07-25 01:29:17.225344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.837 [2024-07-25 01:29:17.225377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.837 qpair failed and we were unable to recover it. 00:28:54.837 [2024-07-25 01:29:17.225927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.837 [2024-07-25 01:29:17.225959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.837 qpair failed and we were unable to recover it. 
00:28:54.837 [2024-07-25 01:29:17.226484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.838 [2024-07-25 01:29:17.226517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.838 qpair failed and we were unable to recover it. 00:28:54.838 [2024-07-25 01:29:17.227071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.838 [2024-07-25 01:29:17.227103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.838 qpair failed and we were unable to recover it. 00:28:54.838 [2024-07-25 01:29:17.227560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.838 [2024-07-25 01:29:17.227591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.838 qpair failed and we were unable to recover it. 00:28:54.838 [2024-07-25 01:29:17.228054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.838 [2024-07-25 01:29:17.228086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.838 qpair failed and we were unable to recover it. 00:28:54.838 [2024-07-25 01:29:17.228649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.838 [2024-07-25 01:29:17.228664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.838 qpair failed and we were unable to recover it. 
00:28:54.838 [2024-07-25 01:29:17.229191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.838 [2024-07-25 01:29:17.229223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.838 qpair failed and we were unable to recover it. 00:28:54.838 [2024-07-25 01:29:17.229658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.838 [2024-07-25 01:29:17.229690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.838 qpair failed and we were unable to recover it. 00:28:54.838 [2024-07-25 01:29:17.230196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.838 [2024-07-25 01:29:17.230228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.838 qpair failed and we were unable to recover it. 00:28:54.838 [2024-07-25 01:29:17.230783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.838 [2024-07-25 01:29:17.230814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.838 qpair failed and we were unable to recover it. 00:28:54.838 [2024-07-25 01:29:17.231358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.838 [2024-07-25 01:29:17.231391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.838 qpair failed and we were unable to recover it. 
00:28:54.838 [2024-07-25 01:29:17.231905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.838 [2024-07-25 01:29:17.231937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.838 qpair failed and we were unable to recover it. 00:28:54.838 [2024-07-25 01:29:17.232463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.838 [2024-07-25 01:29:17.232507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.838 qpair failed and we were unable to recover it. 00:28:54.838 [2024-07-25 01:29:17.233103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.838 [2024-07-25 01:29:17.233136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.838 qpair failed and we were unable to recover it. 00:28:54.838 [2024-07-25 01:29:17.233738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.838 [2024-07-25 01:29:17.233769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.838 qpair failed and we were unable to recover it. 00:28:54.838 [2024-07-25 01:29:17.234231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.838 [2024-07-25 01:29:17.234263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.838 qpair failed and we were unable to recover it. 
00:28:54.838 [2024-07-25 01:29:17.234739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.838 [2024-07-25 01:29:17.234771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.838 qpair failed and we were unable to recover it. 
00:28:54.838 [... message repeated: the same connect() failed (errno = 111) / sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." sequence recurs continuously from 01:29:17.235252 through 01:29:17.292002 ...] 
00:28:54.841 [2024-07-25 01:29:17.292463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.841 [2024-07-25 01:29:17.292495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.841 qpair failed and we were unable to recover it. 
00:28:54.841 [2024-07-25 01:29:17.292946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.841 [2024-07-25 01:29:17.292976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.841 qpair failed and we were unable to recover it. 00:28:54.841 [2024-07-25 01:29:17.293421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.841 [2024-07-25 01:29:17.293454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.841 qpair failed and we were unable to recover it. 00:28:54.841 [2024-07-25 01:29:17.293922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.841 [2024-07-25 01:29:17.293952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.841 qpair failed and we were unable to recover it. 00:28:54.841 [2024-07-25 01:29:17.294338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.841 [2024-07-25 01:29:17.294369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.841 qpair failed and we were unable to recover it. 00:28:54.841 [2024-07-25 01:29:17.294841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.841 [2024-07-25 01:29:17.294871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.841 qpair failed and we were unable to recover it. 
00:28:54.841 [2024-07-25 01:29:17.295258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.841 [2024-07-25 01:29:17.295290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.841 qpair failed and we were unable to recover it. 00:28:54.841 [2024-07-25 01:29:17.295728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.841 [2024-07-25 01:29:17.295758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.841 qpair failed and we were unable to recover it. 00:28:54.841 [2024-07-25 01:29:17.296279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.841 [2024-07-25 01:29:17.296311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.841 qpair failed and we were unable to recover it. 00:28:54.841 [2024-07-25 01:29:17.296791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.841 [2024-07-25 01:29:17.296821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.841 qpair failed and we were unable to recover it. 00:28:54.841 [2024-07-25 01:29:17.297267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.841 [2024-07-25 01:29:17.297299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.841 qpair failed and we were unable to recover it. 
00:28:54.841 [2024-07-25 01:29:17.297731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.841 [2024-07-25 01:29:17.297775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.841 qpair failed and we were unable to recover it. 00:28:54.841 [2024-07-25 01:29:17.298193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.841 [2024-07-25 01:29:17.298224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.841 qpair failed and we were unable to recover it. 00:28:54.842 [2024-07-25 01:29:17.298669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.842 [2024-07-25 01:29:17.298700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.842 qpair failed and we were unable to recover it. 00:28:54.842 [2024-07-25 01:29:17.299231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.842 [2024-07-25 01:29:17.299263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.842 qpair failed and we were unable to recover it. 00:28:54.842 [2024-07-25 01:29:17.299700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.842 [2024-07-25 01:29:17.299730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.842 qpair failed and we were unable to recover it. 
00:28:54.842 [2024-07-25 01:29:17.300181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.842 [2024-07-25 01:29:17.300196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.842 qpair failed and we were unable to recover it. 00:28:54.842 [2024-07-25 01:29:17.300690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.842 [2024-07-25 01:29:17.300720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.842 qpair failed and we were unable to recover it. 00:28:54.842 [2024-07-25 01:29:17.301063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.842 [2024-07-25 01:29:17.301095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.842 qpair failed and we were unable to recover it. 00:28:54.842 [2024-07-25 01:29:17.301527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.842 [2024-07-25 01:29:17.301557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.842 qpair failed and we were unable to recover it. 00:28:54.842 [2024-07-25 01:29:17.301945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.842 [2024-07-25 01:29:17.301975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.842 qpair failed and we were unable to recover it. 
00:28:54.842 [2024-07-25 01:29:17.302378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.842 [2024-07-25 01:29:17.302409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.842 qpair failed and we were unable to recover it. 00:28:54.842 [2024-07-25 01:29:17.302803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.842 [2024-07-25 01:29:17.302833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.842 qpair failed and we were unable to recover it. 00:28:54.842 [2024-07-25 01:29:17.303330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.842 [2024-07-25 01:29:17.303363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.842 qpair failed and we were unable to recover it. 00:28:54.842 [2024-07-25 01:29:17.303739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.842 [2024-07-25 01:29:17.303769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.842 qpair failed and we were unable to recover it. 00:28:54.842 [2024-07-25 01:29:17.303958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.842 [2024-07-25 01:29:17.303973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.842 qpair failed and we were unable to recover it. 
00:28:54.842 [2024-07-25 01:29:17.304391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.842 [2024-07-25 01:29:17.304428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.842 qpair failed and we were unable to recover it. 00:28:54.842 [2024-07-25 01:29:17.304951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.842 [2024-07-25 01:29:17.304981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.842 qpair failed and we were unable to recover it. 00:28:54.842 [2024-07-25 01:29:17.305420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.842 [2024-07-25 01:29:17.305451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.842 qpair failed and we were unable to recover it. 00:28:54.842 [2024-07-25 01:29:17.305959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.842 [2024-07-25 01:29:17.305989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.842 qpair failed and we were unable to recover it. 00:28:54.842 [2024-07-25 01:29:17.306500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.842 [2024-07-25 01:29:17.306532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.842 qpair failed and we were unable to recover it. 
00:28:54.842 [2024-07-25 01:29:17.306968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.842 [2024-07-25 01:29:17.306998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.842 qpair failed and we were unable to recover it. 00:28:54.842 [2024-07-25 01:29:17.307424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.842 [2024-07-25 01:29:17.307455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.842 qpair failed and we were unable to recover it. 00:28:54.842 [2024-07-25 01:29:17.307877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.842 [2024-07-25 01:29:17.307908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.842 qpair failed and we were unable to recover it. 00:28:54.842 [2024-07-25 01:29:17.308289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.842 [2024-07-25 01:29:17.308322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.842 qpair failed and we were unable to recover it. 00:28:54.842 [2024-07-25 01:29:17.308841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.842 [2024-07-25 01:29:17.308882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.842 qpair failed and we were unable to recover it. 
00:28:54.842 [2024-07-25 01:29:17.309244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.842 [2024-07-25 01:29:17.309276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.842 qpair failed and we were unable to recover it. 00:28:54.842 [2024-07-25 01:29:17.309800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.842 [2024-07-25 01:29:17.309831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.842 qpair failed and we were unable to recover it. 00:28:54.842 [2024-07-25 01:29:17.310280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.842 [2024-07-25 01:29:17.310312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.842 qpair failed and we were unable to recover it. 00:28:54.842 [2024-07-25 01:29:17.310827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.842 [2024-07-25 01:29:17.310869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.842 qpair failed and we were unable to recover it. 00:28:54.842 [2024-07-25 01:29:17.311223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.842 [2024-07-25 01:29:17.311255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.842 qpair failed and we were unable to recover it. 
00:28:54.842 [2024-07-25 01:29:17.311631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.842 [2024-07-25 01:29:17.311662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.842 qpair failed and we were unable to recover it. 00:28:54.842 [2024-07-25 01:29:17.312107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.842 [2024-07-25 01:29:17.312123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.842 qpair failed and we were unable to recover it. 00:28:54.842 [2024-07-25 01:29:17.312483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.842 [2024-07-25 01:29:17.312498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.842 qpair failed and we were unable to recover it. 00:28:54.842 [2024-07-25 01:29:17.312891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.842 [2024-07-25 01:29:17.312922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.842 qpair failed and we were unable to recover it. 00:28:54.842 [2024-07-25 01:29:17.313554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.842 [2024-07-25 01:29:17.313585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.842 qpair failed and we were unable to recover it. 
00:28:54.842 [2024-07-25 01:29:17.313957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.842 [2024-07-25 01:29:17.313997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.842 qpair failed and we were unable to recover it. 00:28:54.842 [2024-07-25 01:29:17.314367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.842 [2024-07-25 01:29:17.314398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.842 qpair failed and we were unable to recover it. 00:28:54.842 [2024-07-25 01:29:17.314933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.842 [2024-07-25 01:29:17.314948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.842 qpair failed and we were unable to recover it. 00:28:54.842 [2024-07-25 01:29:17.315352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.842 [2024-07-25 01:29:17.315367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.842 qpair failed and we were unable to recover it. 00:28:54.843 [2024-07-25 01:29:17.315796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.843 [2024-07-25 01:29:17.315826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.843 qpair failed and we were unable to recover it. 
00:28:54.843 [2024-07-25 01:29:17.316298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.843 [2024-07-25 01:29:17.316330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.843 qpair failed and we were unable to recover it. 00:28:54.843 [2024-07-25 01:29:17.316740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.843 [2024-07-25 01:29:17.316770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.843 qpair failed and we were unable to recover it. 00:28:54.843 [2024-07-25 01:29:17.317226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.843 [2024-07-25 01:29:17.317258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:54.843 qpair failed and we were unable to recover it. 00:28:55.136 [2024-07-25 01:29:17.317804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.136 [2024-07-25 01:29:17.317846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.136 qpair failed and we were unable to recover it. 00:28:55.136 [2024-07-25 01:29:17.318275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.136 [2024-07-25 01:29:17.318308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.136 qpair failed and we were unable to recover it. 
00:28:55.136 [2024-07-25 01:29:17.318698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.136 [2024-07-25 01:29:17.318729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.136 qpair failed and we were unable to recover it. 00:28:55.136 [2024-07-25 01:29:17.319183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.136 [2024-07-25 01:29:17.319215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.136 qpair failed and we were unable to recover it. 00:28:55.136 [2024-07-25 01:29:17.319598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.136 [2024-07-25 01:29:17.319628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.136 qpair failed and we were unable to recover it. 00:28:55.136 [2024-07-25 01:29:17.320080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.136 [2024-07-25 01:29:17.320113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.136 qpair failed and we were unable to recover it. 00:28:55.136 [2024-07-25 01:29:17.320510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.136 [2024-07-25 01:29:17.320541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.136 qpair failed and we were unable to recover it. 
00:28:55.136 [2024-07-25 01:29:17.320977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.136 [2024-07-25 01:29:17.320991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.136 qpair failed and we were unable to recover it. 00:28:55.136 [2024-07-25 01:29:17.321354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.136 [2024-07-25 01:29:17.321369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.136 qpair failed and we were unable to recover it. 00:28:55.136 [2024-07-25 01:29:17.321787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.136 [2024-07-25 01:29:17.321801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.136 qpair failed and we were unable to recover it. 00:28:55.136 [2024-07-25 01:29:17.322320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.136 [2024-07-25 01:29:17.322335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.136 qpair failed and we were unable to recover it. 00:28:55.136 [2024-07-25 01:29:17.322674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.136 [2024-07-25 01:29:17.322689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.136 qpair failed and we were unable to recover it. 
00:28:55.136 [2024-07-25 01:29:17.323118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.136 [2024-07-25 01:29:17.323136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.136 qpair failed and we were unable to recover it. 00:28:55.136 [2024-07-25 01:29:17.323583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.136 [2024-07-25 01:29:17.323613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.136 qpair failed and we were unable to recover it. 00:28:55.136 [2024-07-25 01:29:17.324070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.136 [2024-07-25 01:29:17.324086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.136 qpair failed and we were unable to recover it. 00:28:55.136 [2024-07-25 01:29:17.324572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.136 [2024-07-25 01:29:17.324586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.136 qpair failed and we were unable to recover it. 00:28:55.136 [2024-07-25 01:29:17.325053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.136 [2024-07-25 01:29:17.325068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.136 qpair failed and we were unable to recover it. 
00:28:55.136 [2024-07-25 01:29:17.325585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.137 [2024-07-25 01:29:17.325616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.137 qpair failed and we were unable to recover it. 00:28:55.137 [2024-07-25 01:29:17.326124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.137 [2024-07-25 01:29:17.326184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.137 qpair failed and we were unable to recover it. 00:28:55.137 [2024-07-25 01:29:17.326744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.137 [2024-07-25 01:29:17.326770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.137 qpair failed and we were unable to recover it. 00:28:55.137 [2024-07-25 01:29:17.327290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.137 [2024-07-25 01:29:17.327314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.137 qpair failed and we were unable to recover it. 00:28:55.137 [2024-07-25 01:29:17.327787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.137 [2024-07-25 01:29:17.327802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.137 qpair failed and we were unable to recover it. 
00:28:55.137 [2024-07-25 01:29:17.328050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.137 [2024-07-25 01:29:17.328067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.137 qpair failed and we were unable to recover it. 00:28:55.137 [2024-07-25 01:29:17.328494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.137 [2024-07-25 01:29:17.328509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.137 qpair failed and we were unable to recover it. 00:28:55.137 [2024-07-25 01:29:17.328960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.137 [2024-07-25 01:29:17.328975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.137 qpair failed and we were unable to recover it. 00:28:55.137 [2024-07-25 01:29:17.329385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.137 [2024-07-25 01:29:17.329401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.137 qpair failed and we were unable to recover it. 00:28:55.137 [2024-07-25 01:29:17.329800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.137 [2024-07-25 01:29:17.329815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.137 qpair failed and we were unable to recover it. 
00:28:55.140 [repeated entries omitted: the same connect() failed, errno = 111 / sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. sequence recurs continuously from 01:29:17.330226 through 01:29:17.380171]
00:28:55.140 [2024-07-25 01:29:17.380516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.140 [2024-07-25 01:29:17.380532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.140 qpair failed and we were unable to recover it. 00:28:55.140 [2024-07-25 01:29:17.380886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.140 [2024-07-25 01:29:17.380901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.140 qpair failed and we were unable to recover it. 00:28:55.140 [2024-07-25 01:29:17.381335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.140 [2024-07-25 01:29:17.381367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.140 qpair failed and we were unable to recover it. 00:28:55.140 [2024-07-25 01:29:17.381874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.140 [2024-07-25 01:29:17.381905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.140 qpair failed and we were unable to recover it. 00:28:55.140 [2024-07-25 01:29:17.382404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.140 [2024-07-25 01:29:17.382436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.140 qpair failed and we were unable to recover it. 
00:28:55.140 [2024-07-25 01:29:17.382956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.140 [2024-07-25 01:29:17.382986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.140 qpair failed and we were unable to recover it. 00:28:55.140 [2024-07-25 01:29:17.383453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.140 [2024-07-25 01:29:17.383468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.140 qpair failed and we were unable to recover it. 00:28:55.140 [2024-07-25 01:29:17.383924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.140 [2024-07-25 01:29:17.383954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.140 qpair failed and we were unable to recover it. 00:28:55.140 [2024-07-25 01:29:17.384422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.140 [2024-07-25 01:29:17.384454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.140 qpair failed and we were unable to recover it. 00:28:55.140 [2024-07-25 01:29:17.384899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.140 [2024-07-25 01:29:17.384930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.140 qpair failed and we were unable to recover it. 
00:28:55.140 [2024-07-25 01:29:17.385372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.140 [2024-07-25 01:29:17.385403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.140 qpair failed and we were unable to recover it. 00:28:55.140 [2024-07-25 01:29:17.385925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.140 [2024-07-25 01:29:17.385956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.140 qpair failed and we were unable to recover it. 00:28:55.140 [2024-07-25 01:29:17.386221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.140 [2024-07-25 01:29:17.386254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.140 qpair failed and we were unable to recover it. 00:28:55.140 [2024-07-25 01:29:17.386756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.140 [2024-07-25 01:29:17.386786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.140 qpair failed and we were unable to recover it. 00:28:55.140 [2024-07-25 01:29:17.387285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.140 [2024-07-25 01:29:17.387317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.140 qpair failed and we were unable to recover it. 
00:28:55.140 [2024-07-25 01:29:17.387769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.140 [2024-07-25 01:29:17.387800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.140 qpair failed and we were unable to recover it. 00:28:55.140 [2024-07-25 01:29:17.388299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.140 [2024-07-25 01:29:17.388315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.140 qpair failed and we were unable to recover it. 00:28:55.140 [2024-07-25 01:29:17.388817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.140 [2024-07-25 01:29:17.388832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.140 qpair failed and we were unable to recover it. 00:28:55.140 [2024-07-25 01:29:17.388989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.140 [2024-07-25 01:29:17.389005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.140 qpair failed and we were unable to recover it. 00:28:55.140 [2024-07-25 01:29:17.389412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.140 [2024-07-25 01:29:17.389443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.140 qpair failed and we were unable to recover it. 
00:28:55.140 [2024-07-25 01:29:17.389940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.140 [2024-07-25 01:29:17.389970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.140 qpair failed and we were unable to recover it. 00:28:55.140 [2024-07-25 01:29:17.390417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.140 [2024-07-25 01:29:17.390433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.140 qpair failed and we were unable to recover it. 00:28:55.140 [2024-07-25 01:29:17.390898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.140 [2024-07-25 01:29:17.390929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.140 qpair failed and we were unable to recover it. 00:28:55.140 [2024-07-25 01:29:17.391376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.140 [2024-07-25 01:29:17.391409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.140 qpair failed and we were unable to recover it. 00:28:55.140 [2024-07-25 01:29:17.391866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.140 [2024-07-25 01:29:17.391897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.140 qpair failed and we were unable to recover it. 
00:28:55.141 [2024-07-25 01:29:17.392276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.141 [2024-07-25 01:29:17.392308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.141 qpair failed and we were unable to recover it. 00:28:55.141 [2024-07-25 01:29:17.392807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.141 [2024-07-25 01:29:17.392838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.141 qpair failed and we were unable to recover it. 00:28:55.141 [2024-07-25 01:29:17.393357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.141 [2024-07-25 01:29:17.393389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.141 qpair failed and we were unable to recover it. 00:28:55.141 [2024-07-25 01:29:17.393846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.141 [2024-07-25 01:29:17.393860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.141 qpair failed and we were unable to recover it. 00:28:55.141 [2024-07-25 01:29:17.394341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.141 [2024-07-25 01:29:17.394373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.141 qpair failed and we were unable to recover it. 
00:28:55.141 [2024-07-25 01:29:17.394802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.141 [2024-07-25 01:29:17.394817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.141 qpair failed and we were unable to recover it. 00:28:55.141 [2024-07-25 01:29:17.395260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.141 [2024-07-25 01:29:17.395291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.141 qpair failed and we were unable to recover it. 00:28:55.141 [2024-07-25 01:29:17.395736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.141 [2024-07-25 01:29:17.395766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.141 qpair failed and we were unable to recover it. 00:28:55.141 [2024-07-25 01:29:17.396202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.141 [2024-07-25 01:29:17.396234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.141 qpair failed and we were unable to recover it. 00:28:55.141 [2024-07-25 01:29:17.396733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.141 [2024-07-25 01:29:17.396763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.141 qpair failed and we were unable to recover it. 
00:28:55.141 [2024-07-25 01:29:17.397260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.141 [2024-07-25 01:29:17.397291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.141 qpair failed and we were unable to recover it. 00:28:55.141 [2024-07-25 01:29:17.397745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.141 [2024-07-25 01:29:17.397775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.141 qpair failed and we were unable to recover it. 00:28:55.141 [2024-07-25 01:29:17.398271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.141 [2024-07-25 01:29:17.398287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.141 qpair failed and we were unable to recover it. 00:28:55.141 [2024-07-25 01:29:17.398482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.141 [2024-07-25 01:29:17.398513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.141 qpair failed and we were unable to recover it. 00:28:55.141 [2024-07-25 01:29:17.399029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.141 [2024-07-25 01:29:17.399080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.141 qpair failed and we were unable to recover it. 
00:28:55.141 [2024-07-25 01:29:17.399596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.141 [2024-07-25 01:29:17.399610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.141 qpair failed and we were unable to recover it. 00:28:55.141 [2024-07-25 01:29:17.400073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.141 [2024-07-25 01:29:17.400105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.141 qpair failed and we were unable to recover it. 00:28:55.141 [2024-07-25 01:29:17.400568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.141 [2024-07-25 01:29:17.400600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.141 qpair failed and we were unable to recover it. 00:28:55.141 [2024-07-25 01:29:17.400981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.141 [2024-07-25 01:29:17.401012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.141 qpair failed and we were unable to recover it. 00:28:55.141 [2024-07-25 01:29:17.401415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.141 [2024-07-25 01:29:17.401447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.141 qpair failed and we were unable to recover it. 
00:28:55.141 [2024-07-25 01:29:17.401968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.141 [2024-07-25 01:29:17.401999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.141 qpair failed and we were unable to recover it. 00:28:55.141 [2024-07-25 01:29:17.402476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.141 [2024-07-25 01:29:17.402508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.141 qpair failed and we were unable to recover it. 00:28:55.141 [2024-07-25 01:29:17.402945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.141 [2024-07-25 01:29:17.402960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.141 qpair failed and we were unable to recover it. 00:28:55.141 [2024-07-25 01:29:17.403469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.141 [2024-07-25 01:29:17.403485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.141 qpair failed and we were unable to recover it. 00:28:55.141 [2024-07-25 01:29:17.403924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.141 [2024-07-25 01:29:17.403955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.141 qpair failed and we were unable to recover it. 
00:28:55.141 [2024-07-25 01:29:17.404496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.141 [2024-07-25 01:29:17.404527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.141 qpair failed and we were unable to recover it. 00:28:55.141 [2024-07-25 01:29:17.404978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.141 [2024-07-25 01:29:17.405009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.141 qpair failed and we were unable to recover it. 00:28:55.141 [2024-07-25 01:29:17.405540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.141 [2024-07-25 01:29:17.405572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.141 qpair failed and we were unable to recover it. 00:28:55.141 [2024-07-25 01:29:17.406112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.141 [2024-07-25 01:29:17.406145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.141 qpair failed and we were unable to recover it. 00:28:55.141 [2024-07-25 01:29:17.406594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.141 [2024-07-25 01:29:17.406625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.142 qpair failed and we were unable to recover it. 
00:28:55.142 [2024-07-25 01:29:17.407090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.142 [2024-07-25 01:29:17.407105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.142 qpair failed and we were unable to recover it. 00:28:55.142 [2024-07-25 01:29:17.407568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.142 [2024-07-25 01:29:17.407583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.142 qpair failed and we were unable to recover it. 00:28:55.142 [2024-07-25 01:29:17.408017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.142 [2024-07-25 01:29:17.408055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.142 qpair failed and we were unable to recover it. 00:28:55.142 [2024-07-25 01:29:17.408496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.142 [2024-07-25 01:29:17.408511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.142 qpair failed and we were unable to recover it. 00:28:55.142 [2024-07-25 01:29:17.408974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.142 [2024-07-25 01:29:17.409005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.142 qpair failed and we were unable to recover it. 
00:28:55.142 [2024-07-25 01:29:17.409461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.142 [2024-07-25 01:29:17.409493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.142 qpair failed and we were unable to recover it. 00:28:55.142 [2024-07-25 01:29:17.409891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.142 [2024-07-25 01:29:17.409922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.142 qpair failed and we were unable to recover it. 00:28:55.142 [2024-07-25 01:29:17.410252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.142 [2024-07-25 01:29:17.410270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.142 qpair failed and we were unable to recover it. 00:28:55.142 [2024-07-25 01:29:17.410722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.142 [2024-07-25 01:29:17.410737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.142 qpair failed and we were unable to recover it. 00:28:55.142 [2024-07-25 01:29:17.411150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.142 [2024-07-25 01:29:17.411166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.142 qpair failed and we were unable to recover it. 
00:28:55.142 [2024-07-25 01:29:17.411656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.142 [2024-07-25 01:29:17.411687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.142 qpair failed and we were unable to recover it. 00:28:55.142 [2024-07-25 01:29:17.412195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.142 [2024-07-25 01:29:17.412227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.142 qpair failed and we were unable to recover it. 00:28:55.142 [2024-07-25 01:29:17.412698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.142 [2024-07-25 01:29:17.412729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.142 qpair failed and we were unable to recover it. 00:28:55.142 [2024-07-25 01:29:17.413251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.142 [2024-07-25 01:29:17.413282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.142 qpair failed and we were unable to recover it. 00:28:55.142 [2024-07-25 01:29:17.413727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.142 [2024-07-25 01:29:17.413757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.142 qpair failed and we were unable to recover it. 
00:28:55.142 [2024-07-25 01:29:17.414147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.142 [2024-07-25 01:29:17.414177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.142 qpair failed and we were unable to recover it. 00:28:55.142 [2024-07-25 01:29:17.414684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.142 [2024-07-25 01:29:17.414715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.142 qpair failed and we were unable to recover it. 00:28:55.142 [2024-07-25 01:29:17.415162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.142 [2024-07-25 01:29:17.415193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.142 qpair failed and we were unable to recover it. 00:28:55.142 [2024-07-25 01:29:17.415632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.142 [2024-07-25 01:29:17.415663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.142 qpair failed and we were unable to recover it. 00:28:55.142 [2024-07-25 01:29:17.416105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.142 [2024-07-25 01:29:17.416136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.142 qpair failed and we were unable to recover it. 
00:28:55.142 [2024-07-25 01:29:17.416314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.142 [2024-07-25 01:29:17.416346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.142 qpair failed and we were unable to recover it. 00:28:55.142 [2024-07-25 01:29:17.416793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.142 [2024-07-25 01:29:17.416825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.142 qpair failed and we were unable to recover it. 00:28:55.142 [2024-07-25 01:29:17.417261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.142 [2024-07-25 01:29:17.417293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.142 qpair failed and we were unable to recover it. 00:28:55.142 [2024-07-25 01:29:17.417731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.142 [2024-07-25 01:29:17.417773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.142 qpair failed and we were unable to recover it. 00:28:55.142 [2024-07-25 01:29:17.418188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.142 [2024-07-25 01:29:17.418220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.142 qpair failed and we were unable to recover it. 
00:28:55.145 [2024-07-25 01:29:17.472284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.145 [2024-07-25 01:29:17.472315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.145 qpair failed and we were unable to recover it. 00:28:55.145 [2024-07-25 01:29:17.472840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.145 [2024-07-25 01:29:17.472871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.145 qpair failed and we were unable to recover it. 00:28:55.145 [2024-07-25 01:29:17.473259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.145 [2024-07-25 01:29:17.473291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.145 qpair failed and we were unable to recover it. 00:28:55.145 [2024-07-25 01:29:17.473791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.145 [2024-07-25 01:29:17.473822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.145 qpair failed and we were unable to recover it. 00:28:55.145 [2024-07-25 01:29:17.474317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.145 [2024-07-25 01:29:17.474349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.145 qpair failed and we were unable to recover it. 
00:28:55.145 [2024-07-25 01:29:17.474848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.145 [2024-07-25 01:29:17.474879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.145 qpair failed and we were unable to recover it. 00:28:55.145 [2024-07-25 01:29:17.475339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.145 [2024-07-25 01:29:17.475372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.145 qpair failed and we were unable to recover it. 00:28:55.145 [2024-07-25 01:29:17.475893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.145 [2024-07-25 01:29:17.475923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.145 qpair failed and we were unable to recover it. 00:28:55.145 [2024-07-25 01:29:17.476436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.145 [2024-07-25 01:29:17.476468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.145 qpair failed and we were unable to recover it. 00:28:55.145 [2024-07-25 01:29:17.476714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.145 [2024-07-25 01:29:17.476744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.145 qpair failed and we were unable to recover it. 
00:28:55.145 [2024-07-25 01:29:17.477231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.145 [2024-07-25 01:29:17.477262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.146 qpair failed and we were unable to recover it. 00:28:55.146 [2024-07-25 01:29:17.477757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.146 [2024-07-25 01:29:17.477788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.146 qpair failed and we were unable to recover it. 00:28:55.146 [2024-07-25 01:29:17.478214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.146 [2024-07-25 01:29:17.478230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.146 qpair failed and we were unable to recover it. 00:28:55.146 [2024-07-25 01:29:17.478641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.146 [2024-07-25 01:29:17.478672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.146 qpair failed and we were unable to recover it. 00:28:55.146 [2024-07-25 01:29:17.479115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.146 [2024-07-25 01:29:17.479147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.146 qpair failed and we were unable to recover it. 
00:28:55.146 [2024-07-25 01:29:17.479619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.146 [2024-07-25 01:29:17.479650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.146 qpair failed and we were unable to recover it. 00:28:55.146 [2024-07-25 01:29:17.480100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.146 [2024-07-25 01:29:17.480132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.146 qpair failed and we were unable to recover it. 00:28:55.146 [2024-07-25 01:29:17.480660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.146 [2024-07-25 01:29:17.480691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.146 qpair failed and we were unable to recover it. 00:28:55.146 [2024-07-25 01:29:17.481159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.146 [2024-07-25 01:29:17.481191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.146 qpair failed and we were unable to recover it. 00:28:55.146 [2024-07-25 01:29:17.481727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.146 [2024-07-25 01:29:17.481758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.146 qpair failed and we were unable to recover it. 
00:28:55.146 [2024-07-25 01:29:17.482208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.146 [2024-07-25 01:29:17.482240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.146 qpair failed and we were unable to recover it. 00:28:55.146 [2024-07-25 01:29:17.482762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.146 [2024-07-25 01:29:17.482793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.146 qpair failed and we were unable to recover it. 00:28:55.146 [2024-07-25 01:29:17.483313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.146 [2024-07-25 01:29:17.483353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.146 qpair failed and we were unable to recover it. 00:28:55.146 [2024-07-25 01:29:17.483819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.146 [2024-07-25 01:29:17.483833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.146 qpair failed and we were unable to recover it. 00:28:55.146 [2024-07-25 01:29:17.484300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.146 [2024-07-25 01:29:17.484332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.146 qpair failed and we were unable to recover it. 
00:28:55.146 [2024-07-25 01:29:17.484773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.146 [2024-07-25 01:29:17.484804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.146 qpair failed and we were unable to recover it. 00:28:55.146 [2024-07-25 01:29:17.485323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.146 [2024-07-25 01:29:17.485356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.146 qpair failed and we were unable to recover it. 00:28:55.146 [2024-07-25 01:29:17.485804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.146 [2024-07-25 01:29:17.485835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.146 qpair failed and we were unable to recover it. 00:28:55.146 [2024-07-25 01:29:17.486285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.146 [2024-07-25 01:29:17.486317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.146 qpair failed and we were unable to recover it. 00:28:55.146 [2024-07-25 01:29:17.486829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.146 [2024-07-25 01:29:17.486860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.146 qpair failed and we were unable to recover it. 
00:28:55.146 [2024-07-25 01:29:17.487320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.146 [2024-07-25 01:29:17.487351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.146 qpair failed and we were unable to recover it. 00:28:55.146 [2024-07-25 01:29:17.487827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.146 [2024-07-25 01:29:17.487857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.146 qpair failed and we were unable to recover it. 00:28:55.146 [2024-07-25 01:29:17.488325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.146 [2024-07-25 01:29:17.488362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.146 qpair failed and we were unable to recover it. 00:28:55.146 [2024-07-25 01:29:17.488883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.146 [2024-07-25 01:29:17.488914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.146 qpair failed and we were unable to recover it. 00:28:55.146 [2024-07-25 01:29:17.489347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.146 [2024-07-25 01:29:17.489379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.146 qpair failed and we were unable to recover it. 
00:28:55.146 [2024-07-25 01:29:17.489581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.146 [2024-07-25 01:29:17.489611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.146 qpair failed and we were unable to recover it. 00:28:55.146 [2024-07-25 01:29:17.490107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.146 [2024-07-25 01:29:17.490138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.146 qpair failed and we were unable to recover it. 00:28:55.146 [2024-07-25 01:29:17.490670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.146 [2024-07-25 01:29:17.490701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.146 qpair failed and we were unable to recover it. 00:28:55.146 [2024-07-25 01:29:17.491241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.146 [2024-07-25 01:29:17.491256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.146 qpair failed and we were unable to recover it. 00:28:55.146 [2024-07-25 01:29:17.491674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.146 [2024-07-25 01:29:17.491706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.146 qpair failed and we were unable to recover it. 
00:28:55.146 [2024-07-25 01:29:17.492203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.146 [2024-07-25 01:29:17.492234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.146 qpair failed and we were unable to recover it. 00:28:55.146 [2024-07-25 01:29:17.492705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.146 [2024-07-25 01:29:17.492736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.146 qpair failed and we were unable to recover it. 00:28:55.146 [2024-07-25 01:29:17.493282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.146 [2024-07-25 01:29:17.493315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.146 qpair failed and we were unable to recover it. 00:28:55.146 [2024-07-25 01:29:17.493763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.146 [2024-07-25 01:29:17.493793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.146 qpair failed and we were unable to recover it. 00:28:55.146 [2024-07-25 01:29:17.494311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.146 [2024-07-25 01:29:17.494342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.146 qpair failed and we were unable to recover it. 
00:28:55.146 [2024-07-25 01:29:17.494861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.146 [2024-07-25 01:29:17.494891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.146 qpair failed and we were unable to recover it. 00:28:55.146 [2024-07-25 01:29:17.495420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.146 [2024-07-25 01:29:17.495451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.146 qpair failed and we were unable to recover it. 00:28:55.147 [2024-07-25 01:29:17.495890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.147 [2024-07-25 01:29:17.495927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.147 qpair failed and we were unable to recover it. 00:28:55.147 [2024-07-25 01:29:17.496390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.147 [2024-07-25 01:29:17.496422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.147 qpair failed and we were unable to recover it. 00:28:55.147 [2024-07-25 01:29:17.496867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.147 [2024-07-25 01:29:17.496898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.147 qpair failed and we were unable to recover it. 
00:28:55.147 [2024-07-25 01:29:17.497395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.147 [2024-07-25 01:29:17.497426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.147 qpair failed and we were unable to recover it. 00:28:55.147 [2024-07-25 01:29:17.497666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.147 [2024-07-25 01:29:17.497697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.147 qpair failed and we were unable to recover it. 00:28:55.147 [2024-07-25 01:29:17.498191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.147 [2024-07-25 01:29:17.498223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.147 qpair failed and we were unable to recover it. 00:28:55.147 [2024-07-25 01:29:17.498691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.147 [2024-07-25 01:29:17.498722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.147 qpair failed and we were unable to recover it. 00:28:55.147 [2024-07-25 01:29:17.499168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.147 [2024-07-25 01:29:17.499200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.147 qpair failed and we were unable to recover it. 
00:28:55.147 [2024-07-25 01:29:17.499700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.147 [2024-07-25 01:29:17.499714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.147 qpair failed and we were unable to recover it. 00:28:55.147 [2024-07-25 01:29:17.500204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.147 [2024-07-25 01:29:17.500235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.147 qpair failed and we were unable to recover it. 00:28:55.147 [2024-07-25 01:29:17.500706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.147 [2024-07-25 01:29:17.500737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.147 qpair failed and we were unable to recover it. 00:28:55.147 [2024-07-25 01:29:17.501182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.147 [2024-07-25 01:29:17.501214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.147 qpair failed and we were unable to recover it. 00:28:55.147 [2024-07-25 01:29:17.501663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.147 [2024-07-25 01:29:17.501694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.147 qpair failed and we were unable to recover it. 
00:28:55.147 [2024-07-25 01:29:17.502158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.147 [2024-07-25 01:29:17.502201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.147 qpair failed and we were unable to recover it. 00:28:55.147 [2024-07-25 01:29:17.502685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.147 [2024-07-25 01:29:17.502700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.147 qpair failed and we were unable to recover it. 00:28:55.147 [2024-07-25 01:29:17.503099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.147 [2024-07-25 01:29:17.503131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.147 qpair failed and we were unable to recover it. 00:28:55.147 [2024-07-25 01:29:17.503600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.147 [2024-07-25 01:29:17.503630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.147 qpair failed and we were unable to recover it. 00:28:55.147 [2024-07-25 01:29:17.504078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.147 [2024-07-25 01:29:17.504110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.147 qpair failed and we were unable to recover it. 
00:28:55.147 [2024-07-25 01:29:17.504607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.147 [2024-07-25 01:29:17.504637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.147 qpair failed and we were unable to recover it. 00:28:55.147 [2024-07-25 01:29:17.505112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.147 [2024-07-25 01:29:17.505144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.147 qpair failed and we were unable to recover it. 00:28:55.147 [2024-07-25 01:29:17.505599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.147 [2024-07-25 01:29:17.505630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.147 qpair failed and we were unable to recover it. 00:28:55.147 [2024-07-25 01:29:17.505828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.147 [2024-07-25 01:29:17.505860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.147 qpair failed and we were unable to recover it. 00:28:55.147 [2024-07-25 01:29:17.506401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.147 [2024-07-25 01:29:17.506433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.147 qpair failed and we were unable to recover it. 
00:28:55.147 [2024-07-25 01:29:17.506911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.147 [2024-07-25 01:29:17.506941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.147 qpair failed and we were unable to recover it. 00:28:55.147 [2024-07-25 01:29:17.507409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.147 [2024-07-25 01:29:17.507440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.147 qpair failed and we were unable to recover it. 00:28:55.147 [2024-07-25 01:29:17.507892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.147 [2024-07-25 01:29:17.507928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.147 qpair failed and we were unable to recover it. 00:28:55.147 [2024-07-25 01:29:17.508365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.147 [2024-07-25 01:29:17.508397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.147 qpair failed and we were unable to recover it. 00:28:55.147 [2024-07-25 01:29:17.508839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.147 [2024-07-25 01:29:17.508869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.147 qpair failed and we were unable to recover it. 
00:28:55.147 [2024-07-25 01:29:17.509317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.147 [2024-07-25 01:29:17.509354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:55.147 qpair failed and we were unable to recover it.
[the same three-line error sequence repeats for tqpair=0x7f83dc000b90 (addr=10.0.0.2, port=4420) from 01:29:17.509774 through 01:29:17.564187]
00:28:55.150 [2024-07-25 01:29:17.564438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.150 [2024-07-25 01:29:17.564480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.150 qpair failed and we were unable to recover it. 00:28:55.150 [2024-07-25 01:29:17.564893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.150 [2024-07-25 01:29:17.564924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.151 qpair failed and we were unable to recover it. 00:28:55.151 [2024-07-25 01:29:17.565440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.151 [2024-07-25 01:29:17.565476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.151 qpair failed and we were unable to recover it. 00:28:55.151 [2024-07-25 01:29:17.565932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.151 [2024-07-25 01:29:17.565962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.151 qpair failed and we were unable to recover it. 00:28:55.151 [2024-07-25 01:29:17.566367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.151 [2024-07-25 01:29:17.566382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.151 qpair failed and we were unable to recover it. 
00:28:55.151 [2024-07-25 01:29:17.566744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.151 [2024-07-25 01:29:17.566774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.151 qpair failed and we were unable to recover it. 00:28:55.151 [2024-07-25 01:29:17.567178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.151 [2024-07-25 01:29:17.567214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.151 qpair failed and we were unable to recover it. 00:28:55.151 [2024-07-25 01:29:17.567679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.151 [2024-07-25 01:29:17.567709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.151 qpair failed and we were unable to recover it. 00:28:55.151 [2024-07-25 01:29:17.568106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.151 [2024-07-25 01:29:17.568138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.151 qpair failed and we were unable to recover it. 00:28:55.151 [2024-07-25 01:29:17.568647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.151 [2024-07-25 01:29:17.568662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.151 qpair failed and we were unable to recover it. 
00:28:55.151 [2024-07-25 01:29:17.569125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.151 [2024-07-25 01:29:17.569141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.151 qpair failed and we were unable to recover it. 00:28:55.151 [2024-07-25 01:29:17.569425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.151 [2024-07-25 01:29:17.569440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.151 qpair failed and we were unable to recover it. 00:28:55.151 [2024-07-25 01:29:17.569903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.151 [2024-07-25 01:29:17.569918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.151 qpair failed and we were unable to recover it. 00:28:55.151 [2024-07-25 01:29:17.570404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.151 [2024-07-25 01:29:17.570420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.151 qpair failed and we were unable to recover it. 00:28:55.151 [2024-07-25 01:29:17.570770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.151 [2024-07-25 01:29:17.570786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.151 qpair failed and we were unable to recover it. 
00:28:55.151 [2024-07-25 01:29:17.571128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.151 [2024-07-25 01:29:17.571144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.151 qpair failed and we were unable to recover it. 00:28:55.151 [2024-07-25 01:29:17.571609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.151 [2024-07-25 01:29:17.571625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.151 qpair failed and we were unable to recover it. 00:28:55.151 [2024-07-25 01:29:17.572028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.151 [2024-07-25 01:29:17.572056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.151 qpair failed and we were unable to recover it. 00:28:55.151 [2024-07-25 01:29:17.572484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.151 [2024-07-25 01:29:17.572499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.151 qpair failed and we were unable to recover it. 00:28:55.151 [2024-07-25 01:29:17.572853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.151 [2024-07-25 01:29:17.572868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.151 qpair failed and we were unable to recover it. 
00:28:55.151 [2024-07-25 01:29:17.573282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.151 [2024-07-25 01:29:17.573298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.151 qpair failed and we were unable to recover it. 00:28:55.151 [2024-07-25 01:29:17.573699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.151 [2024-07-25 01:29:17.573729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.151 qpair failed and we were unable to recover it. 00:28:55.151 [2024-07-25 01:29:17.574127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.151 [2024-07-25 01:29:17.574160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.151 qpair failed and we were unable to recover it. 00:28:55.151 [2024-07-25 01:29:17.574619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.151 [2024-07-25 01:29:17.574649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.151 qpair failed and we were unable to recover it. 00:28:55.151 [2024-07-25 01:29:17.575128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.151 [2024-07-25 01:29:17.575163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.151 qpair failed and we were unable to recover it. 
00:28:55.151 [2024-07-25 01:29:17.575575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.151 [2024-07-25 01:29:17.575590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.151 qpair failed and we were unable to recover it. 00:28:55.151 [2024-07-25 01:29:17.575961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.151 [2024-07-25 01:29:17.575992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.151 qpair failed and we were unable to recover it. 00:28:55.151 [2024-07-25 01:29:17.576453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.151 [2024-07-25 01:29:17.576487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.151 qpair failed and we were unable to recover it. 00:28:55.151 [2024-07-25 01:29:17.576926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.151 [2024-07-25 01:29:17.576941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.151 qpair failed and we were unable to recover it. 00:28:55.151 [2024-07-25 01:29:17.577390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.151 [2024-07-25 01:29:17.577422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.151 qpair failed and we were unable to recover it. 
00:28:55.151 [2024-07-25 01:29:17.577906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.151 [2024-07-25 01:29:17.577936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.151 qpair failed and we were unable to recover it. 00:28:55.151 [2024-07-25 01:29:17.578340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.151 [2024-07-25 01:29:17.578373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.151 qpair failed and we were unable to recover it. 00:28:55.151 [2024-07-25 01:29:17.578764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.151 [2024-07-25 01:29:17.578794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.151 qpair failed and we were unable to recover it. 00:28:55.151 [2024-07-25 01:29:17.579298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.151 [2024-07-25 01:29:17.579329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.151 qpair failed and we were unable to recover it. 00:28:55.151 [2024-07-25 01:29:17.579945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.151 [2024-07-25 01:29:17.579975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.151 qpair failed and we were unable to recover it. 
00:28:55.151 [2024-07-25 01:29:17.580357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.151 [2024-07-25 01:29:17.580389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.151 qpair failed and we were unable to recover it. 00:28:55.151 [2024-07-25 01:29:17.580780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.151 [2024-07-25 01:29:17.580795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.151 qpair failed and we were unable to recover it. 00:28:55.151 [2024-07-25 01:29:17.581018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.151 [2024-07-25 01:29:17.581056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.151 qpair failed and we were unable to recover it. 00:28:55.151 [2024-07-25 01:29:17.581446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.151 [2024-07-25 01:29:17.581477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.151 qpair failed and we were unable to recover it. 00:28:55.151 [2024-07-25 01:29:17.581857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.152 [2024-07-25 01:29:17.581889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.152 qpair failed and we were unable to recover it. 
00:28:55.152 [2024-07-25 01:29:17.582279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.152 [2024-07-25 01:29:17.582311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.152 qpair failed and we were unable to recover it. 00:28:55.152 [2024-07-25 01:29:17.582699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.152 [2024-07-25 01:29:17.582729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.152 qpair failed and we were unable to recover it. 00:28:55.152 [2024-07-25 01:29:17.583248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.152 [2024-07-25 01:29:17.583266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.152 qpair failed and we were unable to recover it. 00:28:55.152 [2024-07-25 01:29:17.583738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.152 [2024-07-25 01:29:17.583769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.152 qpair failed and we were unable to recover it. 00:28:55.152 [2024-07-25 01:29:17.584226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.152 [2024-07-25 01:29:17.584259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.152 qpair failed and we were unable to recover it. 
00:28:55.152 [2024-07-25 01:29:17.584703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.152 [2024-07-25 01:29:17.584717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.152 qpair failed and we were unable to recover it. 00:28:55.152 [2024-07-25 01:29:17.585119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.152 [2024-07-25 01:29:17.585152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.152 qpair failed and we were unable to recover it. 00:28:55.152 [2024-07-25 01:29:17.585539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.152 [2024-07-25 01:29:17.585570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.152 qpair failed and we were unable to recover it. 00:28:55.152 [2024-07-25 01:29:17.586024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.152 [2024-07-25 01:29:17.586067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.152 qpair failed and we were unable to recover it. 00:28:55.152 [2024-07-25 01:29:17.586517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.152 [2024-07-25 01:29:17.586548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.152 qpair failed and we were unable to recover it. 
00:28:55.152 [2024-07-25 01:29:17.587214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.152 [2024-07-25 01:29:17.587246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.152 qpair failed and we were unable to recover it. 00:28:55.152 [2024-07-25 01:29:17.587636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.152 [2024-07-25 01:29:17.587666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.152 qpair failed and we were unable to recover it. 00:28:55.152 [2024-07-25 01:29:17.587914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.152 [2024-07-25 01:29:17.587944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.152 qpair failed and we were unable to recover it. 00:28:55.152 [2024-07-25 01:29:17.588333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.152 [2024-07-25 01:29:17.588365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.152 qpair failed and we were unable to recover it. 00:28:55.152 [2024-07-25 01:29:17.588805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.152 [2024-07-25 01:29:17.588836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.152 qpair failed and we were unable to recover it. 
00:28:55.152 [2024-07-25 01:29:17.589216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.152 [2024-07-25 01:29:17.589248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.152 qpair failed and we were unable to recover it. 00:28:55.152 [2024-07-25 01:29:17.589694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.152 [2024-07-25 01:29:17.589725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.152 qpair failed and we were unable to recover it. 00:28:55.152 [2024-07-25 01:29:17.590192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.152 [2024-07-25 01:29:17.590224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.152 qpair failed and we were unable to recover it. 00:28:55.152 [2024-07-25 01:29:17.590590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.152 [2024-07-25 01:29:17.590621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.152 qpair failed and we were unable to recover it. 00:28:55.152 [2024-07-25 01:29:17.591019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.152 [2024-07-25 01:29:17.591060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.152 qpair failed and we were unable to recover it. 
00:28:55.152 [2024-07-25 01:29:17.591439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.152 [2024-07-25 01:29:17.591470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.152 qpair failed and we were unable to recover it. 00:28:55.152 [2024-07-25 01:29:17.591851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.152 [2024-07-25 01:29:17.591881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.152 qpair failed and we were unable to recover it. 00:28:55.152 [2024-07-25 01:29:17.592257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.152 [2024-07-25 01:29:17.592288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.152 qpair failed and we were unable to recover it. 00:28:55.152 [2024-07-25 01:29:17.592740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.152 [2024-07-25 01:29:17.592771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.152 qpair failed and we were unable to recover it. 00:28:55.152 [2024-07-25 01:29:17.593232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.152 [2024-07-25 01:29:17.593273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.152 qpair failed and we were unable to recover it. 
00:28:55.152 [2024-07-25 01:29:17.593609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.152 [2024-07-25 01:29:17.593624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.152 qpair failed and we were unable to recover it. 00:28:55.152 [2024-07-25 01:29:17.593792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.152 [2024-07-25 01:29:17.593807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.152 qpair failed and we were unable to recover it. 00:28:55.152 [2024-07-25 01:29:17.594173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.152 [2024-07-25 01:29:17.594205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.152 qpair failed and we were unable to recover it. 00:28:55.152 [2024-07-25 01:29:17.594600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.152 [2024-07-25 01:29:17.594630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.152 qpair failed and we were unable to recover it. 00:28:55.152 [2024-07-25 01:29:17.595030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.152 [2024-07-25 01:29:17.595050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.152 qpair failed and we were unable to recover it. 
00:28:55.152 [2024-07-25 01:29:17.595451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.152 [2024-07-25 01:29:17.595467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.152 qpair failed and we were unable to recover it. 00:28:55.152 [2024-07-25 01:29:17.595937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.152 [2024-07-25 01:29:17.595951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.152 qpair failed and we were unable to recover it. 00:28:55.152 [2024-07-25 01:29:17.596324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.152 [2024-07-25 01:29:17.596356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.152 qpair failed and we were unable to recover it. 00:28:55.152 [2024-07-25 01:29:17.596804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.152 [2024-07-25 01:29:17.596835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.152 qpair failed and we were unable to recover it. 00:28:55.152 [2024-07-25 01:29:17.597235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.152 [2024-07-25 01:29:17.597266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.152 qpair failed and we were unable to recover it. 
00:28:55.152 [2024-07-25 01:29:17.598672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.152 [2024-07-25 01:29:17.598699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:55.152 qpair failed and we were unable to recover it.
[... the same three-line sequence (connect() failed, errno = 111; sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every reconnect attempt from 01:29:17.599 through 01:29:17.647 ...]
00:28:55.424 [2024-07-25 01:29:17.648251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.424 [2024-07-25 01:29:17.648267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.424 qpair failed and we were unable to recover it. 00:28:55.424 [2024-07-25 01:29:17.648667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.425 [2024-07-25 01:29:17.648682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.425 qpair failed and we were unable to recover it. 00:28:55.425 [2024-07-25 01:29:17.649145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.425 [2024-07-25 01:29:17.649160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.425 qpair failed and we were unable to recover it. 00:28:55.425 [2024-07-25 01:29:17.649570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.425 [2024-07-25 01:29:17.649585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.425 qpair failed and we were unable to recover it. 00:28:55.425 [2024-07-25 01:29:17.650052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.425 [2024-07-25 01:29:17.650067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.425 qpair failed and we were unable to recover it. 
00:28:55.425 [2024-07-25 01:29:17.650425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.425 [2024-07-25 01:29:17.650439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.425 qpair failed and we were unable to recover it. 00:28:55.425 [2024-07-25 01:29:17.650781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.425 [2024-07-25 01:29:17.650796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.425 qpair failed and we were unable to recover it. 00:28:55.425 [2024-07-25 01:29:17.651281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.425 [2024-07-25 01:29:17.651297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.425 qpair failed and we were unable to recover it. 00:28:55.425 [2024-07-25 01:29:17.651733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.425 [2024-07-25 01:29:17.651747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.425 qpair failed and we were unable to recover it. 00:28:55.425 [2024-07-25 01:29:17.651994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.425 [2024-07-25 01:29:17.652009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.425 qpair failed and we were unable to recover it. 
00:28:55.425 [2024-07-25 01:29:17.652457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.425 [2024-07-25 01:29:17.652472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.425 qpair failed and we were unable to recover it. 00:28:55.425 [2024-07-25 01:29:17.652878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.425 [2024-07-25 01:29:17.652893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.425 qpair failed and we were unable to recover it. 00:28:55.425 [2024-07-25 01:29:17.653357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.425 [2024-07-25 01:29:17.653372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.425 qpair failed and we were unable to recover it. 00:28:55.425 [2024-07-25 01:29:17.653779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.425 [2024-07-25 01:29:17.653794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.425 qpair failed and we were unable to recover it. 00:28:55.425 [2024-07-25 01:29:17.654257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.425 [2024-07-25 01:29:17.654273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.425 qpair failed and we were unable to recover it. 
00:28:55.425 [2024-07-25 01:29:17.654688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.425 [2024-07-25 01:29:17.654703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.425 qpair failed and we were unable to recover it. 00:28:55.425 [2024-07-25 01:29:17.655117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.425 [2024-07-25 01:29:17.655132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.425 qpair failed and we were unable to recover it. 00:28:55.425 [2024-07-25 01:29:17.655547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.425 [2024-07-25 01:29:17.655563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.425 qpair failed and we were unable to recover it. 00:28:55.425 [2024-07-25 01:29:17.655994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.425 [2024-07-25 01:29:17.656009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.425 qpair failed and we were unable to recover it. 00:28:55.425 [2024-07-25 01:29:17.656369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.425 [2024-07-25 01:29:17.656383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.425 qpair failed and we were unable to recover it. 
00:28:55.425 [2024-07-25 01:29:17.656734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.425 [2024-07-25 01:29:17.656749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.425 qpair failed and we were unable to recover it. 00:28:55.425 [2024-07-25 01:29:17.657155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.425 [2024-07-25 01:29:17.657170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.425 qpair failed and we were unable to recover it. 00:28:55.425 [2024-07-25 01:29:17.657613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.425 [2024-07-25 01:29:17.657627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.425 qpair failed and we were unable to recover it. 00:28:55.425 [2024-07-25 01:29:17.658089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.425 [2024-07-25 01:29:17.658104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.425 qpair failed and we were unable to recover it. 00:28:55.425 [2024-07-25 01:29:17.658460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.425 [2024-07-25 01:29:17.658475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.425 qpair failed and we were unable to recover it. 
00:28:55.425 [2024-07-25 01:29:17.658899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.425 [2024-07-25 01:29:17.658914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.425 qpair failed and we were unable to recover it. 00:28:55.425 [2024-07-25 01:29:17.659329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.425 [2024-07-25 01:29:17.659345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.425 qpair failed and we were unable to recover it. 00:28:55.425 [2024-07-25 01:29:17.659741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.425 [2024-07-25 01:29:17.659755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.425 qpair failed and we were unable to recover it. 00:28:55.425 [2024-07-25 01:29:17.660220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.425 [2024-07-25 01:29:17.660235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.425 qpair failed and we were unable to recover it. 00:28:55.425 [2024-07-25 01:29:17.660720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.425 [2024-07-25 01:29:17.660735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.425 qpair failed and we were unable to recover it. 
00:28:55.425 [2024-07-25 01:29:17.661165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.425 [2024-07-25 01:29:17.661180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.425 qpair failed and we were unable to recover it. 00:28:55.425 [2024-07-25 01:29:17.661535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.425 [2024-07-25 01:29:17.661550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.425 qpair failed and we were unable to recover it. 00:28:55.425 [2024-07-25 01:29:17.661899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.425 [2024-07-25 01:29:17.661913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.425 qpair failed and we were unable to recover it. 00:28:55.425 [2024-07-25 01:29:17.662375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.425 [2024-07-25 01:29:17.662391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.425 qpair failed and we were unable to recover it. 00:28:55.425 [2024-07-25 01:29:17.662781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.425 [2024-07-25 01:29:17.662795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.425 qpair failed and we were unable to recover it. 
00:28:55.425 [2024-07-25 01:29:17.663280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.425 [2024-07-25 01:29:17.663296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.425 qpair failed and we were unable to recover it. 00:28:55.425 [2024-07-25 01:29:17.663758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.425 [2024-07-25 01:29:17.663772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.425 qpair failed and we were unable to recover it. 00:28:55.425 [2024-07-25 01:29:17.664183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.425 [2024-07-25 01:29:17.664198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.425 qpair failed and we were unable to recover it. 00:28:55.425 [2024-07-25 01:29:17.664596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.425 [2024-07-25 01:29:17.664611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.425 qpair failed and we were unable to recover it. 00:28:55.426 [2024-07-25 01:29:17.665032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.426 [2024-07-25 01:29:17.665056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.426 qpair failed and we were unable to recover it. 
00:28:55.426 [2024-07-25 01:29:17.665543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.426 [2024-07-25 01:29:17.665559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.426 qpair failed and we were unable to recover it. 00:28:55.426 [2024-07-25 01:29:17.665965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.426 [2024-07-25 01:29:17.665980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.426 qpair failed and we were unable to recover it. 00:28:55.426 [2024-07-25 01:29:17.666385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.426 [2024-07-25 01:29:17.666401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.426 qpair failed and we were unable to recover it. 00:28:55.426 [2024-07-25 01:29:17.666810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.426 [2024-07-25 01:29:17.666826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.426 qpair failed and we were unable to recover it. 00:28:55.426 [2024-07-25 01:29:17.667258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.426 [2024-07-25 01:29:17.667273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.426 qpair failed and we were unable to recover it. 
00:28:55.426 [2024-07-25 01:29:17.667735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.426 [2024-07-25 01:29:17.667751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.426 qpair failed and we were unable to recover it. 00:28:55.426 [2024-07-25 01:29:17.668113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.426 [2024-07-25 01:29:17.668129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.426 qpair failed and we were unable to recover it. 00:28:55.426 [2024-07-25 01:29:17.668537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.426 [2024-07-25 01:29:17.668551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.426 qpair failed and we were unable to recover it. 00:28:55.426 [2024-07-25 01:29:17.668960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.426 [2024-07-25 01:29:17.668975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.426 qpair failed and we were unable to recover it. 00:28:55.426 [2024-07-25 01:29:17.669379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.426 [2024-07-25 01:29:17.669394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.426 qpair failed and we were unable to recover it. 
00:28:55.426 [2024-07-25 01:29:17.669807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.426 [2024-07-25 01:29:17.669821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.426 qpair failed and we were unable to recover it. 00:28:55.426 [2024-07-25 01:29:17.670225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.426 [2024-07-25 01:29:17.670240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.426 qpair failed and we were unable to recover it. 00:28:55.426 [2024-07-25 01:29:17.670653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.426 [2024-07-25 01:29:17.670668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.426 qpair failed and we were unable to recover it. 00:28:55.426 [2024-07-25 01:29:17.671158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.426 [2024-07-25 01:29:17.671173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.426 qpair failed and we were unable to recover it. 00:28:55.426 [2024-07-25 01:29:17.671654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.426 [2024-07-25 01:29:17.671669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.426 qpair failed and we were unable to recover it. 
00:28:55.426 [2024-07-25 01:29:17.672068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.426 [2024-07-25 01:29:17.672083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.426 qpair failed and we were unable to recover it. 00:28:55.426 [2024-07-25 01:29:17.672461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.426 [2024-07-25 01:29:17.672476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.426 qpair failed and we were unable to recover it. 00:28:55.426 [2024-07-25 01:29:17.672833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.426 [2024-07-25 01:29:17.672848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.426 qpair failed and we were unable to recover it. 00:28:55.426 [2024-07-25 01:29:17.673242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.426 [2024-07-25 01:29:17.673257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.426 qpair failed and we were unable to recover it. 00:28:55.426 [2024-07-25 01:29:17.673616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.426 [2024-07-25 01:29:17.673631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.426 qpair failed and we were unable to recover it. 
00:28:55.426 [2024-07-25 01:29:17.673982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.426 [2024-07-25 01:29:17.673996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.426 qpair failed and we were unable to recover it. 00:28:55.426 [2024-07-25 01:29:17.674401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.426 [2024-07-25 01:29:17.674416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.426 qpair failed and we were unable to recover it. 00:28:55.426 [2024-07-25 01:29:17.674956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.426 [2024-07-25 01:29:17.674971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.426 qpair failed and we were unable to recover it. 00:28:55.426 [2024-07-25 01:29:17.675450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.426 [2024-07-25 01:29:17.675465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.426 qpair failed and we were unable to recover it. 00:28:55.426 [2024-07-25 01:29:17.675871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.426 [2024-07-25 01:29:17.675886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.426 qpair failed and we were unable to recover it. 
00:28:55.426 [2024-07-25 01:29:17.676372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.426 [2024-07-25 01:29:17.676387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.426 qpair failed and we were unable to recover it. 00:28:55.426 [2024-07-25 01:29:17.676804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.426 [2024-07-25 01:29:17.676819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.426 qpair failed and we were unable to recover it. 00:28:55.426 [2024-07-25 01:29:17.677307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.426 [2024-07-25 01:29:17.677322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.426 qpair failed and we were unable to recover it. 00:28:55.426 [2024-07-25 01:29:17.677672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.426 [2024-07-25 01:29:17.677687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.426 qpair failed and we were unable to recover it. 00:28:55.426 [2024-07-25 01:29:17.678102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.426 [2024-07-25 01:29:17.678118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.426 qpair failed and we were unable to recover it. 
00:28:55.426 [2024-07-25 01:29:17.678526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.426 [2024-07-25 01:29:17.678541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.426 qpair failed and we were unable to recover it. 00:28:55.426 [2024-07-25 01:29:17.679027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.426 [2024-07-25 01:29:17.679055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.426 qpair failed and we were unable to recover it. 00:28:55.426 [2024-07-25 01:29:17.679461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.426 [2024-07-25 01:29:17.679476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.426 qpair failed and we were unable to recover it. 00:28:55.426 [2024-07-25 01:29:17.679811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.426 [2024-07-25 01:29:17.679826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.426 qpair failed and we were unable to recover it. 00:28:55.426 [2024-07-25 01:29:17.680319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.426 [2024-07-25 01:29:17.680335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.426 qpair failed and we were unable to recover it. 
00:28:55.426 [2024-07-25 01:29:17.680795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.426 [2024-07-25 01:29:17.680810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.426 qpair failed and we were unable to recover it. 
[... identical connect() failure (errno = 111, ECONNREFUSED) and qpair recovery error repeated continuously for tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420, from 01:29:17.680795 through 01:29:17.732773 ...]
00:28:55.430 [2024-07-25 01:29:17.733235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.430 [2024-07-25 01:29:17.733250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.430 qpair failed and we were unable to recover it. 00:28:55.430 [2024-07-25 01:29:17.733663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.430 [2024-07-25 01:29:17.733678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.430 qpair failed and we were unable to recover it. 00:28:55.430 [2024-07-25 01:29:17.734106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.430 [2024-07-25 01:29:17.734121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.430 qpair failed and we were unable to recover it. 00:28:55.430 [2024-07-25 01:29:17.734334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.430 [2024-07-25 01:29:17.734349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.430 qpair failed and we were unable to recover it. 00:28:55.430 [2024-07-25 01:29:17.734823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.430 [2024-07-25 01:29:17.734838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.430 qpair failed and we were unable to recover it. 
00:28:55.430 [2024-07-25 01:29:17.735284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.430 [2024-07-25 01:29:17.735300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.430 qpair failed and we were unable to recover it. 00:28:55.430 [2024-07-25 01:29:17.735767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.430 [2024-07-25 01:29:17.735781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.430 qpair failed and we were unable to recover it. 00:28:55.430 [2024-07-25 01:29:17.736180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.430 [2024-07-25 01:29:17.736196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.430 qpair failed and we were unable to recover it. 00:28:55.430 [2024-07-25 01:29:17.736681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.430 [2024-07-25 01:29:17.736695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.430 qpair failed and we were unable to recover it. 00:28:55.430 [2024-07-25 01:29:17.737091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.430 [2024-07-25 01:29:17.737109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.430 qpair failed and we were unable to recover it. 
00:28:55.430 [2024-07-25 01:29:17.737534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.430 [2024-07-25 01:29:17.737550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.430 qpair failed and we were unable to recover it. 00:28:55.430 [2024-07-25 01:29:17.737955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.430 [2024-07-25 01:29:17.737969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.430 qpair failed and we were unable to recover it. 00:28:55.430 [2024-07-25 01:29:17.738326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.430 [2024-07-25 01:29:17.738342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.430 qpair failed and we were unable to recover it. 00:28:55.430 [2024-07-25 01:29:17.738750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.430 [2024-07-25 01:29:17.738765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.430 qpair failed and we were unable to recover it. 00:28:55.430 [2024-07-25 01:29:17.739227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.430 [2024-07-25 01:29:17.739243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.430 qpair failed and we were unable to recover it. 
00:28:55.430 [2024-07-25 01:29:17.739724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.430 [2024-07-25 01:29:17.739739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.430 qpair failed and we were unable to recover it. 00:28:55.430 [2024-07-25 01:29:17.740154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.430 [2024-07-25 01:29:17.740170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.430 qpair failed and we were unable to recover it. 00:28:55.430 [2024-07-25 01:29:17.740563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.430 [2024-07-25 01:29:17.740577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.430 qpair failed and we were unable to recover it. 00:28:55.430 [2024-07-25 01:29:17.741038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.430 [2024-07-25 01:29:17.741064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.430 qpair failed and we were unable to recover it. 00:28:55.430 [2024-07-25 01:29:17.741497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.430 [2024-07-25 01:29:17.741512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.430 qpair failed and we were unable to recover it. 
00:28:55.430 [2024-07-25 01:29:17.741999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.430 [2024-07-25 01:29:17.742014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.430 qpair failed and we were unable to recover it. 00:28:55.430 [2024-07-25 01:29:17.742478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.430 [2024-07-25 01:29:17.742494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.430 qpair failed and we were unable to recover it. 00:28:55.430 [2024-07-25 01:29:17.742908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.430 [2024-07-25 01:29:17.742923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.430 qpair failed and we were unable to recover it. 00:28:55.430 [2024-07-25 01:29:17.743338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.430 [2024-07-25 01:29:17.743354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.430 qpair failed and we were unable to recover it. 00:28:55.430 [2024-07-25 01:29:17.743818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.430 [2024-07-25 01:29:17.743832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.430 qpair failed and we were unable to recover it. 
00:28:55.430 [2024-07-25 01:29:17.744241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.430 [2024-07-25 01:29:17.744256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.430 qpair failed and we were unable to recover it. 00:28:55.430 [2024-07-25 01:29:17.744719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.430 [2024-07-25 01:29:17.744733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.430 qpair failed and we were unable to recover it. 00:28:55.430 [2024-07-25 01:29:17.745150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.430 [2024-07-25 01:29:17.745165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.430 qpair failed and we were unable to recover it. 00:28:55.430 [2024-07-25 01:29:17.745627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.430 [2024-07-25 01:29:17.745641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.430 qpair failed and we were unable to recover it. 00:28:55.430 [2024-07-25 01:29:17.746106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.431 [2024-07-25 01:29:17.746122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.431 qpair failed and we were unable to recover it. 
00:28:55.431 [2024-07-25 01:29:17.746556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.431 [2024-07-25 01:29:17.746570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.431 qpair failed and we were unable to recover it. 00:28:55.431 [2024-07-25 01:29:17.747035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.431 [2024-07-25 01:29:17.747058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.431 qpair failed and we were unable to recover it. 00:28:55.431 [2024-07-25 01:29:17.747461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.431 [2024-07-25 01:29:17.747476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.431 qpair failed and we were unable to recover it. 00:28:55.431 [2024-07-25 01:29:17.747885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.431 [2024-07-25 01:29:17.747900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.431 qpair failed and we were unable to recover it. 00:28:55.431 [2024-07-25 01:29:17.748382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.431 [2024-07-25 01:29:17.748397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.431 qpair failed and we were unable to recover it. 
00:28:55.431 [2024-07-25 01:29:17.748808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.431 [2024-07-25 01:29:17.748822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.431 qpair failed and we were unable to recover it. 00:28:55.431 [2024-07-25 01:29:17.749315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.431 [2024-07-25 01:29:17.749331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.431 qpair failed and we were unable to recover it. 00:28:55.431 [2024-07-25 01:29:17.749813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.431 [2024-07-25 01:29:17.749828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.431 qpair failed and we were unable to recover it. 00:28:55.431 [2024-07-25 01:29:17.750244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.431 [2024-07-25 01:29:17.750259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.431 qpair failed and we were unable to recover it. 00:28:55.431 [2024-07-25 01:29:17.750721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.431 [2024-07-25 01:29:17.750736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.431 qpair failed and we were unable to recover it. 
00:28:55.431 [2024-07-25 01:29:17.751167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.431 [2024-07-25 01:29:17.751182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.431 qpair failed and we were unable to recover it. 00:28:55.431 [2024-07-25 01:29:17.751538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.431 [2024-07-25 01:29:17.751551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.431 qpair failed and we were unable to recover it. 00:28:55.431 [2024-07-25 01:29:17.751694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.431 [2024-07-25 01:29:17.751708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.431 qpair failed and we were unable to recover it. 00:28:55.431 [2024-07-25 01:29:17.752173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.431 [2024-07-25 01:29:17.752195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.431 qpair failed and we were unable to recover it. 00:28:55.431 [2024-07-25 01:29:17.752689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.431 [2024-07-25 01:29:17.752704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.431 qpair failed and we were unable to recover it. 
00:28:55.431 [2024-07-25 01:29:17.753137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.431 [2024-07-25 01:29:17.753152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.431 qpair failed and we were unable to recover it. 00:28:55.431 [2024-07-25 01:29:17.753571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.431 [2024-07-25 01:29:17.753586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.431 qpair failed and we were unable to recover it. 00:28:55.431 [2024-07-25 01:29:17.754018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.431 [2024-07-25 01:29:17.754033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.431 qpair failed and we were unable to recover it. 00:28:55.431 [2024-07-25 01:29:17.754402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.431 [2024-07-25 01:29:17.754416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.431 qpair failed and we were unable to recover it. 00:28:55.431 [2024-07-25 01:29:17.754854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.431 [2024-07-25 01:29:17.754871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.431 qpair failed and we were unable to recover it. 
00:28:55.431 [2024-07-25 01:29:17.755285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.431 [2024-07-25 01:29:17.755301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.431 qpair failed and we were unable to recover it. 00:28:55.431 [2024-07-25 01:29:17.755704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.431 [2024-07-25 01:29:17.755719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.431 qpair failed and we were unable to recover it. 00:28:55.431 [2024-07-25 01:29:17.756133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.431 [2024-07-25 01:29:17.756149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.431 qpair failed and we were unable to recover it. 00:28:55.431 [2024-07-25 01:29:17.756517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.431 [2024-07-25 01:29:17.756531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.431 qpair failed and we were unable to recover it. 00:28:55.431 [2024-07-25 01:29:17.757024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.431 [2024-07-25 01:29:17.757038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.431 qpair failed and we were unable to recover it. 
00:28:55.431 [2024-07-25 01:29:17.757392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.431 [2024-07-25 01:29:17.757408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.431 qpair failed and we were unable to recover it. 00:28:55.431 [2024-07-25 01:29:17.757876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.431 [2024-07-25 01:29:17.757890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.431 qpair failed and we were unable to recover it. 00:28:55.431 [2024-07-25 01:29:17.758369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.431 [2024-07-25 01:29:17.758384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.431 qpair failed and we were unable to recover it. 00:28:55.431 [2024-07-25 01:29:17.758782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.431 [2024-07-25 01:29:17.758796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.431 qpair failed and we were unable to recover it. 00:28:55.431 [2024-07-25 01:29:17.759280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.431 [2024-07-25 01:29:17.759295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.431 qpair failed and we were unable to recover it. 
00:28:55.431 [2024-07-25 01:29:17.759694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.431 [2024-07-25 01:29:17.759709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.431 qpair failed and we were unable to recover it. 00:28:55.431 [2024-07-25 01:29:17.760194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.431 [2024-07-25 01:29:17.760209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.431 qpair failed and we were unable to recover it. 00:28:55.431 [2024-07-25 01:29:17.760565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.431 [2024-07-25 01:29:17.760580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.431 qpair failed and we were unable to recover it. 00:28:55.431 [2024-07-25 01:29:17.761066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.431 [2024-07-25 01:29:17.761082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.431 qpair failed and we were unable to recover it. 00:28:55.431 [2024-07-25 01:29:17.761482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.431 [2024-07-25 01:29:17.761497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.431 qpair failed and we were unable to recover it. 
00:28:55.431 [2024-07-25 01:29:17.761982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.431 [2024-07-25 01:29:17.761996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.431 qpair failed and we were unable to recover it. 00:28:55.431 [2024-07-25 01:29:17.762424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.431 [2024-07-25 01:29:17.762439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.431 qpair failed and we were unable to recover it. 00:28:55.431 [2024-07-25 01:29:17.762876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.432 [2024-07-25 01:29:17.762891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.432 qpair failed and we were unable to recover it. 00:28:55.432 [2024-07-25 01:29:17.763293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.432 [2024-07-25 01:29:17.763309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.432 qpair failed and we were unable to recover it. 00:28:55.432 [2024-07-25 01:29:17.763665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.432 [2024-07-25 01:29:17.763680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.432 qpair failed and we were unable to recover it. 
00:28:55.432 [2024-07-25 01:29:17.764140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.432 [2024-07-25 01:29:17.764156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.432 qpair failed and we were unable to recover it. 00:28:55.432 [2024-07-25 01:29:17.764565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.432 [2024-07-25 01:29:17.764579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.432 qpair failed and we were unable to recover it. 00:28:55.432 [2024-07-25 01:29:17.765041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.432 [2024-07-25 01:29:17.765067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.432 qpair failed and we were unable to recover it. 00:28:55.432 [2024-07-25 01:29:17.765414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.432 [2024-07-25 01:29:17.765429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.432 qpair failed and we were unable to recover it. 00:28:55.432 [2024-07-25 01:29:17.765912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.432 [2024-07-25 01:29:17.765927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.432 qpair failed and we were unable to recover it. 
00:28:55.432 [2024-07-25 01:29:17.766273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.432 [2024-07-25 01:29:17.766288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.432 qpair failed and we were unable to recover it. 00:28:55.432 [2024-07-25 01:29:17.766780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.432 [2024-07-25 01:29:17.766795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.432 qpair failed and we were unable to recover it. 00:28:55.432 [2024-07-25 01:29:17.767196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.432 [2024-07-25 01:29:17.767211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.432 qpair failed and we were unable to recover it. 00:28:55.432 [2024-07-25 01:29:17.767673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.432 [2024-07-25 01:29:17.767687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.432 qpair failed and we were unable to recover it. 00:28:55.432 [2024-07-25 01:29:17.768174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.432 [2024-07-25 01:29:17.768190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.432 qpair failed and we were unable to recover it. 
00:28:55.435 [2024-07-25 01:29:17.816487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.435 [2024-07-25 01:29:17.816502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.435 qpair failed and we were unable to recover it. 00:28:55.435 [2024-07-25 01:29:17.816967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.435 [2024-07-25 01:29:17.816982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.435 qpair failed and we were unable to recover it. 00:28:55.435 [2024-07-25 01:29:17.817623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.435 [2024-07-25 01:29:17.817638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.435 qpair failed and we were unable to recover it. 00:28:55.435 [2024-07-25 01:29:17.818063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.435 [2024-07-25 01:29:17.818079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.435 qpair failed and we were unable to recover it. 00:28:55.435 [2024-07-25 01:29:17.818559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.435 [2024-07-25 01:29:17.818574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.435 qpair failed and we were unable to recover it. 
00:28:55.435 [2024-07-25 01:29:17.819007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.435 [2024-07-25 01:29:17.819022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.435 qpair failed and we were unable to recover it. 00:28:55.435 [2024-07-25 01:29:17.819495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.435 [2024-07-25 01:29:17.819511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.435 qpair failed and we were unable to recover it. 00:28:55.435 [2024-07-25 01:29:17.819910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.435 [2024-07-25 01:29:17.819925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.435 qpair failed and we were unable to recover it. 00:28:55.435 [2024-07-25 01:29:17.820387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.435 [2024-07-25 01:29:17.820403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.435 qpair failed and we were unable to recover it. 00:28:55.435 [2024-07-25 01:29:17.820865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.435 [2024-07-25 01:29:17.820880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.435 qpair failed and we were unable to recover it. 
00:28:55.435 [2024-07-25 01:29:17.821292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.435 [2024-07-25 01:29:17.821309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.435 qpair failed and we were unable to recover it. 00:28:55.435 [2024-07-25 01:29:17.821796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.435 [2024-07-25 01:29:17.821810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.435 qpair failed and we were unable to recover it. 00:28:55.435 [2024-07-25 01:29:17.822274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.435 [2024-07-25 01:29:17.822290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.435 qpair failed and we were unable to recover it. 00:28:55.435 [2024-07-25 01:29:17.822698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.435 [2024-07-25 01:29:17.822713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.435 qpair failed and we were unable to recover it. 00:28:55.435 [2024-07-25 01:29:17.823120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.435 [2024-07-25 01:29:17.823135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.435 qpair failed and we were unable to recover it. 
00:28:55.435 [2024-07-25 01:29:17.823617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.435 [2024-07-25 01:29:17.823632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.435 qpair failed and we were unable to recover it. 00:28:55.435 [2024-07-25 01:29:17.824066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.435 [2024-07-25 01:29:17.824082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.435 qpair failed and we were unable to recover it. 00:28:55.435 [2024-07-25 01:29:17.824561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.435 [2024-07-25 01:29:17.824576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.435 qpair failed and we were unable to recover it. 00:28:55.435 [2024-07-25 01:29:17.825003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.435 [2024-07-25 01:29:17.825018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.435 qpair failed and we were unable to recover it. 00:28:55.435 [2024-07-25 01:29:17.825507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.435 [2024-07-25 01:29:17.825525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.435 qpair failed and we were unable to recover it. 
00:28:55.435 [2024-07-25 01:29:17.826012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.435 [2024-07-25 01:29:17.826027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.435 qpair failed and we were unable to recover it. 00:28:55.435 [2024-07-25 01:29:17.826463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.435 [2024-07-25 01:29:17.826479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.435 qpair failed and we were unable to recover it. 00:28:55.435 [2024-07-25 01:29:17.826882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.435 [2024-07-25 01:29:17.826897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.435 qpair failed and we were unable to recover it. 00:28:55.436 [2024-07-25 01:29:17.827397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.436 [2024-07-25 01:29:17.827413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.436 qpair failed and we were unable to recover it. 00:28:55.436 [2024-07-25 01:29:17.827820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.436 [2024-07-25 01:29:17.827835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.436 qpair failed and we were unable to recover it. 
00:28:55.436 [2024-07-25 01:29:17.828188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.436 [2024-07-25 01:29:17.828203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.436 qpair failed and we were unable to recover it. 00:28:55.436 [2024-07-25 01:29:17.828690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.436 [2024-07-25 01:29:17.828706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.436 qpair failed and we were unable to recover it. 00:28:55.436 [2024-07-25 01:29:17.829140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.436 [2024-07-25 01:29:17.829156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.436 qpair failed and we were unable to recover it. 00:28:55.436 [2024-07-25 01:29:17.829590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.436 [2024-07-25 01:29:17.829605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.436 qpair failed and we were unable to recover it. 00:28:55.436 [2024-07-25 01:29:17.830006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.436 [2024-07-25 01:29:17.830020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.436 qpair failed and we were unable to recover it. 
00:28:55.436 [2024-07-25 01:29:17.830490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.436 [2024-07-25 01:29:17.830505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.436 qpair failed and we were unable to recover it. 00:28:55.436 [2024-07-25 01:29:17.830980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.436 [2024-07-25 01:29:17.830994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.436 qpair failed and we were unable to recover it. 00:28:55.436 [2024-07-25 01:29:17.831457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.436 [2024-07-25 01:29:17.831472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.436 qpair failed and we were unable to recover it. 00:28:55.436 [2024-07-25 01:29:17.831887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.436 [2024-07-25 01:29:17.831902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.436 qpair failed and we were unable to recover it. 00:28:55.436 [2024-07-25 01:29:17.832389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.436 [2024-07-25 01:29:17.832404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.436 qpair failed and we were unable to recover it. 
00:28:55.436 [2024-07-25 01:29:17.832823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.436 [2024-07-25 01:29:17.832838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.436 qpair failed and we were unable to recover it. 00:28:55.436 [2024-07-25 01:29:17.833251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.436 [2024-07-25 01:29:17.833267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.436 qpair failed and we were unable to recover it. 00:28:55.436 [2024-07-25 01:29:17.833672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.436 [2024-07-25 01:29:17.833687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.436 qpair failed and we were unable to recover it. 00:28:55.436 [2024-07-25 01:29:17.834147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.436 [2024-07-25 01:29:17.834163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.436 qpair failed and we were unable to recover it. 00:28:55.436 [2024-07-25 01:29:17.834562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.436 [2024-07-25 01:29:17.834578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.436 qpair failed and we were unable to recover it. 
00:28:55.436 [2024-07-25 01:29:17.835065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.436 [2024-07-25 01:29:17.835081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.436 qpair failed and we were unable to recover it. 00:28:55.436 [2024-07-25 01:29:17.835478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.436 [2024-07-25 01:29:17.835493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.436 qpair failed and we were unable to recover it. 00:28:55.436 [2024-07-25 01:29:17.835976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.436 [2024-07-25 01:29:17.835991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.436 qpair failed and we were unable to recover it. 00:28:55.436 [2024-07-25 01:29:17.836347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.436 [2024-07-25 01:29:17.836362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.436 qpair failed and we were unable to recover it. 00:28:55.436 [2024-07-25 01:29:17.836822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.436 [2024-07-25 01:29:17.836836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.436 qpair failed and we were unable to recover it. 
00:28:55.436 [2024-07-25 01:29:17.837297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.436 [2024-07-25 01:29:17.837312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.436 qpair failed and we were unable to recover it. 00:28:55.436 [2024-07-25 01:29:17.837751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.436 [2024-07-25 01:29:17.837766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.436 qpair failed and we were unable to recover it. 00:28:55.436 [2024-07-25 01:29:17.838196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.436 [2024-07-25 01:29:17.838211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.436 qpair failed and we were unable to recover it. 00:28:55.436 [2024-07-25 01:29:17.838640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.436 [2024-07-25 01:29:17.838655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.436 qpair failed and we were unable to recover it. 00:28:55.436 [2024-07-25 01:29:17.839133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.436 [2024-07-25 01:29:17.839148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.436 qpair failed and we were unable to recover it. 
00:28:55.436 [2024-07-25 01:29:17.839575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.436 [2024-07-25 01:29:17.839590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.436 qpair failed and we were unable to recover it. 00:28:55.436 [2024-07-25 01:29:17.839989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.436 [2024-07-25 01:29:17.840003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.436 qpair failed and we were unable to recover it. 00:28:55.436 [2024-07-25 01:29:17.840425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.436 [2024-07-25 01:29:17.840441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.436 qpair failed and we were unable to recover it. 00:28:55.436 [2024-07-25 01:29:17.840816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.436 [2024-07-25 01:29:17.840831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.436 qpair failed and we were unable to recover it. 00:28:55.436 [2024-07-25 01:29:17.841254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.436 [2024-07-25 01:29:17.841269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.436 qpair failed and we were unable to recover it. 
00:28:55.436 [2024-07-25 01:29:17.841730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.436 [2024-07-25 01:29:17.841745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.436 qpair failed and we were unable to recover it. 00:28:55.436 [2024-07-25 01:29:17.842108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.436 [2024-07-25 01:29:17.842123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.436 qpair failed and we were unable to recover it. 00:28:55.436 [2024-07-25 01:29:17.842538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.436 [2024-07-25 01:29:17.842553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.436 qpair failed and we were unable to recover it. 00:28:55.436 [2024-07-25 01:29:17.842975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.437 [2024-07-25 01:29:17.842990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.437 qpair failed and we were unable to recover it. 00:28:55.437 [2024-07-25 01:29:17.843420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.437 [2024-07-25 01:29:17.843439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.437 qpair failed and we were unable to recover it. 
00:28:55.437 [2024-07-25 01:29:17.843902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.437 [2024-07-25 01:29:17.843917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.437 qpair failed and we were unable to recover it. 00:28:55.437 [2024-07-25 01:29:17.844380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.437 [2024-07-25 01:29:17.844396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.437 qpair failed and we were unable to recover it. 00:28:55.437 [2024-07-25 01:29:17.844793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.437 [2024-07-25 01:29:17.844808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.437 qpair failed and we were unable to recover it. 00:28:55.437 [2024-07-25 01:29:17.845229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.437 [2024-07-25 01:29:17.845244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.437 qpair failed and we were unable to recover it. 00:28:55.437 [2024-07-25 01:29:17.845689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.437 [2024-07-25 01:29:17.845704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.437 qpair failed and we were unable to recover it. 
00:28:55.437 [2024-07-25 01:29:17.846189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.437 [2024-07-25 01:29:17.846204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.437 qpair failed and we were unable to recover it. 00:28:55.437 [2024-07-25 01:29:17.846570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.437 [2024-07-25 01:29:17.846585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.437 qpair failed and we were unable to recover it. 00:28:55.437 [2024-07-25 01:29:17.847069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.437 [2024-07-25 01:29:17.847084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.437 qpair failed and we were unable to recover it. 00:28:55.437 [2024-07-25 01:29:17.847501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.437 [2024-07-25 01:29:17.847516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.437 qpair failed and we were unable to recover it. 00:28:55.437 [2024-07-25 01:29:17.847976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.437 [2024-07-25 01:29:17.847990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.437 qpair failed and we were unable to recover it. 
00:28:55.437 [2024-07-25 01:29:17.848402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.437 [2024-07-25 01:29:17.848417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.437 qpair failed and we were unable to recover it. 00:28:55.437 [2024-07-25 01:29:17.848903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.437 [2024-07-25 01:29:17.848918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.437 qpair failed and we were unable to recover it. 00:28:55.437 [2024-07-25 01:29:17.849406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.437 [2024-07-25 01:29:17.849421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.437 qpair failed and we were unable to recover it. 00:28:55.437 [2024-07-25 01:29:17.849836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.437 [2024-07-25 01:29:17.849851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.437 qpair failed and we were unable to recover it. 00:28:55.437 [2024-07-25 01:29:17.850340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.437 [2024-07-25 01:29:17.850356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.437 qpair failed and we were unable to recover it. 
00:28:55.437 [2024-07-25 01:29:17.850841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.437 [2024-07-25 01:29:17.850855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:55.437 qpair failed and we were unable to recover it.
[... the same three-line sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats ~114 more times, timestamps 2024-07-25 01:29:17.851269 through 01:29:17.900900 ...]
00:28:55.440 [2024-07-25 01:29:17.901256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.440 [2024-07-25 01:29:17.901272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.440 qpair failed and we were unable to recover it. 00:28:55.440 [2024-07-25 01:29:17.901697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.440 [2024-07-25 01:29:17.901712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.440 qpair failed and we were unable to recover it. 00:28:55.440 [2024-07-25 01:29:17.902130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.440 [2024-07-25 01:29:17.902146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.440 qpair failed and we were unable to recover it. 00:28:55.440 [2024-07-25 01:29:17.902487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.440 [2024-07-25 01:29:17.902502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.440 qpair failed and we were unable to recover it. 00:28:55.440 [2024-07-25 01:29:17.902844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.440 [2024-07-25 01:29:17.902859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.440 qpair failed and we were unable to recover it. 
00:28:55.440 [2024-07-25 01:29:17.903273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.440 [2024-07-25 01:29:17.903287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.440 qpair failed and we were unable to recover it. 00:28:55.440 [2024-07-25 01:29:17.903796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.440 [2024-07-25 01:29:17.903811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.440 qpair failed and we were unable to recover it. 00:28:55.440 [2024-07-25 01:29:17.904224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.440 [2024-07-25 01:29:17.904240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.440 qpair failed and we were unable to recover it. 00:28:55.440 [2024-07-25 01:29:17.904706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.440 [2024-07-25 01:29:17.904721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.440 qpair failed and we were unable to recover it. 00:28:55.440 [2024-07-25 01:29:17.905026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.440 [2024-07-25 01:29:17.905041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.440 qpair failed and we were unable to recover it. 
00:28:55.440 [2024-07-25 01:29:17.905394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.440 [2024-07-25 01:29:17.905409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.440 qpair failed and we were unable to recover it. 00:28:55.709 [2024-07-25 01:29:17.905822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.709 [2024-07-25 01:29:17.905838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.709 qpair failed and we were unable to recover it. 00:28:55.709 [2024-07-25 01:29:17.906271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.709 [2024-07-25 01:29:17.906287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.709 qpair failed and we were unable to recover it. 00:28:55.709 [2024-07-25 01:29:17.906683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.709 [2024-07-25 01:29:17.906698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.709 qpair failed and we were unable to recover it. 00:28:55.709 [2024-07-25 01:29:17.907113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.709 [2024-07-25 01:29:17.907128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.709 qpair failed and we were unable to recover it. 
00:28:55.709 [2024-07-25 01:29:17.907613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.709 [2024-07-25 01:29:17.907629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.709 qpair failed and we were unable to recover it. 00:28:55.709 [2024-07-25 01:29:17.908035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.709 [2024-07-25 01:29:17.908061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.709 qpair failed and we were unable to recover it. 00:28:55.709 [2024-07-25 01:29:17.908508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.709 [2024-07-25 01:29:17.908523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.709 qpair failed and we were unable to recover it. 00:28:55.709 [2024-07-25 01:29:17.908872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.709 [2024-07-25 01:29:17.908887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.709 qpair failed and we were unable to recover it. 00:28:55.709 [2024-07-25 01:29:17.909363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.709 [2024-07-25 01:29:17.909379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.709 qpair failed and we were unable to recover it. 
00:28:55.709 [2024-07-25 01:29:17.909796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.709 [2024-07-25 01:29:17.909811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.709 qpair failed and we were unable to recover it. 00:28:55.709 [2024-07-25 01:29:17.910208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.709 [2024-07-25 01:29:17.910223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.709 qpair failed and we were unable to recover it. 00:28:55.709 [2024-07-25 01:29:17.910557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.709 [2024-07-25 01:29:17.910572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.709 qpair failed and we were unable to recover it. 00:28:55.709 [2024-07-25 01:29:17.910997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.709 [2024-07-25 01:29:17.911012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.709 qpair failed and we were unable to recover it. 00:28:55.709 [2024-07-25 01:29:17.911528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.709 [2024-07-25 01:29:17.911543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.709 qpair failed and we were unable to recover it. 
00:28:55.709 [2024-07-25 01:29:17.911699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.709 [2024-07-25 01:29:17.911714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.709 qpair failed and we were unable to recover it. 00:28:55.709 [2024-07-25 01:29:17.912180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.709 [2024-07-25 01:29:17.912195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.709 qpair failed and we were unable to recover it. 00:28:55.709 [2024-07-25 01:29:17.912631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.709 [2024-07-25 01:29:17.912646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.709 qpair failed and we were unable to recover it. 00:28:55.709 [2024-07-25 01:29:17.913298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.709 [2024-07-25 01:29:17.913314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.709 qpair failed and we were unable to recover it. 00:28:55.709 [2024-07-25 01:29:17.913714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.709 [2024-07-25 01:29:17.913729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.709 qpair failed and we were unable to recover it. 
00:28:55.709 [2024-07-25 01:29:17.914145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.709 [2024-07-25 01:29:17.914160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.709 qpair failed and we were unable to recover it. 00:28:55.709 [2024-07-25 01:29:17.914564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.709 [2024-07-25 01:29:17.914578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.710 qpair failed and we were unable to recover it. 00:28:55.710 [2024-07-25 01:29:17.914989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.710 [2024-07-25 01:29:17.915004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.710 qpair failed and we were unable to recover it. 00:28:55.710 [2024-07-25 01:29:17.915484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.710 [2024-07-25 01:29:17.915499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.710 qpair failed and we were unable to recover it. 00:28:55.710 [2024-07-25 01:29:17.915983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.710 [2024-07-25 01:29:17.915999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.710 qpair failed and we were unable to recover it. 
00:28:55.710 [2024-07-25 01:29:17.916465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.710 [2024-07-25 01:29:17.916480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.710 qpair failed and we were unable to recover it. 00:28:55.710 [2024-07-25 01:29:17.916985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.710 [2024-07-25 01:29:17.917000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.710 qpair failed and we were unable to recover it. 00:28:55.710 [2024-07-25 01:29:17.917147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.710 [2024-07-25 01:29:17.917162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.710 qpair failed and we were unable to recover it. 00:28:55.710 [2024-07-25 01:29:17.917623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.710 [2024-07-25 01:29:17.917638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.710 qpair failed and we were unable to recover it. 00:28:55.710 [2024-07-25 01:29:17.918052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.710 [2024-07-25 01:29:17.918066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.710 qpair failed and we were unable to recover it. 
00:28:55.710 [2024-07-25 01:29:17.918474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.710 [2024-07-25 01:29:17.918489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.710 qpair failed and we were unable to recover it. 00:28:55.710 [2024-07-25 01:29:17.918977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.710 [2024-07-25 01:29:17.918992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.710 qpair failed and we were unable to recover it. 00:28:55.710 [2024-07-25 01:29:17.919455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.710 [2024-07-25 01:29:17.919471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.710 qpair failed and we were unable to recover it. 00:28:55.710 [2024-07-25 01:29:17.919870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.710 [2024-07-25 01:29:17.919885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.710 qpair failed and we were unable to recover it. 00:28:55.710 [2024-07-25 01:29:17.920352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.710 [2024-07-25 01:29:17.920368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.710 qpair failed and we were unable to recover it. 
00:28:55.710 [2024-07-25 01:29:17.920792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.710 [2024-07-25 01:29:17.920807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.710 qpair failed and we were unable to recover it. 00:28:55.710 [2024-07-25 01:29:17.921280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.710 [2024-07-25 01:29:17.921295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.710 qpair failed and we were unable to recover it. 00:28:55.710 [2024-07-25 01:29:17.921756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.710 [2024-07-25 01:29:17.921771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.710 qpair failed and we were unable to recover it. 00:28:55.710 [2024-07-25 01:29:17.922186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.710 [2024-07-25 01:29:17.922201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.710 qpair failed and we were unable to recover it. 00:28:55.710 [2024-07-25 01:29:17.922555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.710 [2024-07-25 01:29:17.922570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.710 qpair failed and we were unable to recover it. 
00:28:55.710 [2024-07-25 01:29:17.923054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.710 [2024-07-25 01:29:17.923070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.710 qpair failed and we were unable to recover it. 00:28:55.710 [2024-07-25 01:29:17.923430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.710 [2024-07-25 01:29:17.923445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.710 qpair failed and we were unable to recover it. 00:28:55.710 [2024-07-25 01:29:17.923861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.710 [2024-07-25 01:29:17.923876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.710 qpair failed and we were unable to recover it. 00:28:55.710 [2024-07-25 01:29:17.924367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.710 [2024-07-25 01:29:17.924383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.710 qpair failed and we were unable to recover it. 00:28:55.710 [2024-07-25 01:29:17.924805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.710 [2024-07-25 01:29:17.924820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.710 qpair failed and we were unable to recover it. 
00:28:55.710 [2024-07-25 01:29:17.925230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.710 [2024-07-25 01:29:17.925246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.710 qpair failed and we were unable to recover it. 00:28:55.710 [2024-07-25 01:29:17.925645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.710 [2024-07-25 01:29:17.925663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.710 qpair failed and we were unable to recover it. 00:28:55.711 [2024-07-25 01:29:17.926147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.711 [2024-07-25 01:29:17.926163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.711 qpair failed and we were unable to recover it. 00:28:55.711 [2024-07-25 01:29:17.926328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.711 [2024-07-25 01:29:17.926343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.711 qpair failed and we were unable to recover it. 00:28:55.711 [2024-07-25 01:29:17.926745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.711 [2024-07-25 01:29:17.926760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.711 qpair failed and we were unable to recover it. 
00:28:55.711 [2024-07-25 01:29:17.927145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.711 [2024-07-25 01:29:17.927160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.711 qpair failed and we were unable to recover it. 00:28:55.711 [2024-07-25 01:29:17.927504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.711 [2024-07-25 01:29:17.927519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.711 qpair failed and we were unable to recover it. 00:28:55.711 [2024-07-25 01:29:17.927856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.711 [2024-07-25 01:29:17.927871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.711 qpair failed and we were unable to recover it. 00:28:55.711 [2024-07-25 01:29:17.928274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.711 [2024-07-25 01:29:17.928289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.711 qpair failed and we were unable to recover it. 00:28:55.711 [2024-07-25 01:29:17.928704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.711 [2024-07-25 01:29:17.928720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.711 qpair failed and we were unable to recover it. 
00:28:55.711 [2024-07-25 01:29:17.929121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.711 [2024-07-25 01:29:17.929136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.711 qpair failed and we were unable to recover it. 00:28:55.711 [2024-07-25 01:29:17.929542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.711 [2024-07-25 01:29:17.929557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.711 qpair failed and we were unable to recover it. 00:28:55.711 [2024-07-25 01:29:17.930051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.711 [2024-07-25 01:29:17.930067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.711 qpair failed and we were unable to recover it. 00:28:55.711 [2024-07-25 01:29:17.930476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.711 [2024-07-25 01:29:17.930491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.711 qpair failed and we were unable to recover it. 00:28:55.711 [2024-07-25 01:29:17.930967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.711 [2024-07-25 01:29:17.930982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.711 qpair failed and we were unable to recover it. 
00:28:55.711 [2024-07-25 01:29:17.931431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.711 [2024-07-25 01:29:17.931447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.711 qpair failed and we were unable to recover it. 00:28:55.711 [2024-07-25 01:29:17.931936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.711 [2024-07-25 01:29:17.931951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.711 qpair failed and we were unable to recover it. 00:28:55.711 [2024-07-25 01:29:17.932432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.711 [2024-07-25 01:29:17.932448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.711 qpair failed and we were unable to recover it. 00:28:55.711 [2024-07-25 01:29:17.932945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.711 [2024-07-25 01:29:17.932960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.711 qpair failed and we were unable to recover it. 00:28:55.711 [2024-07-25 01:29:17.933362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.711 [2024-07-25 01:29:17.933376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.711 qpair failed and we were unable to recover it. 
00:28:55.711 [2024-07-25 01:29:17.933870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:55.711 [2024-07-25 01:29:17.933885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:55.711 qpair failed and we were unable to recover it.
[... same three-line sequence — connect() failed, errno = 111 (ECONNREFUSED); sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats continuously from 01:29:17.934347 through 01:29:17.990687 ...]
00:28:55.715 [2024-07-25 01:29:17.991142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.715 [2024-07-25 01:29:17.991175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.715 qpair failed and we were unable to recover it. 00:28:55.715 [2024-07-25 01:29:17.991640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.715 [2024-07-25 01:29:17.991671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.715 qpair failed and we were unable to recover it. 00:28:55.715 [2024-07-25 01:29:17.992117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.715 [2024-07-25 01:29:17.992150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.715 qpair failed and we were unable to recover it. 00:28:55.715 [2024-07-25 01:29:17.992669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.715 [2024-07-25 01:29:17.992700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.715 qpair failed and we were unable to recover it. 00:28:55.715 [2024-07-25 01:29:17.993221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.715 [2024-07-25 01:29:17.993253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.715 qpair failed and we were unable to recover it. 
00:28:55.715 [2024-07-25 01:29:17.993755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.715 [2024-07-25 01:29:17.993785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.715 qpair failed and we were unable to recover it. 00:28:55.715 [2024-07-25 01:29:17.994229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.715 [2024-07-25 01:29:17.994262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.715 qpair failed and we were unable to recover it. 00:28:55.715 [2024-07-25 01:29:17.994763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.715 [2024-07-25 01:29:17.994794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.715 qpair failed and we were unable to recover it. 00:28:55.715 [2024-07-25 01:29:17.995238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.715 [2024-07-25 01:29:17.995270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.715 qpair failed and we were unable to recover it. 00:28:55.715 [2024-07-25 01:29:17.995689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.715 [2024-07-25 01:29:17.995720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.715 qpair failed and we were unable to recover it. 
00:28:55.715 [2024-07-25 01:29:17.996212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.715 [2024-07-25 01:29:17.996227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.715 qpair failed and we were unable to recover it. 00:28:55.715 [2024-07-25 01:29:17.996444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.715 [2024-07-25 01:29:17.996475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.715 qpair failed and we were unable to recover it. 00:28:55.715 [2024-07-25 01:29:17.996925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.715 [2024-07-25 01:29:17.996956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.715 qpair failed and we were unable to recover it. 00:28:55.716 [2024-07-25 01:29:17.997462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.716 [2024-07-25 01:29:17.997495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.716 qpair failed and we were unable to recover it. 00:28:55.716 [2024-07-25 01:29:17.997950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.716 [2024-07-25 01:29:17.997980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.716 qpair failed and we were unable to recover it. 
00:28:55.716 [2024-07-25 01:29:17.998428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.716 [2024-07-25 01:29:17.998460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.716 qpair failed and we were unable to recover it. 00:28:55.716 [2024-07-25 01:29:17.998926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.716 [2024-07-25 01:29:17.998958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.716 qpair failed and we were unable to recover it. 00:28:55.716 [2024-07-25 01:29:17.999408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.716 [2024-07-25 01:29:17.999441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.716 qpair failed and we were unable to recover it. 00:28:55.716 [2024-07-25 01:29:17.999942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.716 [2024-07-25 01:29:17.999973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.716 qpair failed and we were unable to recover it. 00:28:55.716 [2024-07-25 01:29:18.000491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.716 [2024-07-25 01:29:18.000523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.716 qpair failed and we were unable to recover it. 
00:28:55.716 [2024-07-25 01:29:18.000898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.716 [2024-07-25 01:29:18.000927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.716 qpair failed and we were unable to recover it. 00:28:55.716 [2024-07-25 01:29:18.001436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.716 [2024-07-25 01:29:18.001481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.716 qpair failed and we were unable to recover it. 00:28:55.716 [2024-07-25 01:29:18.001948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.716 [2024-07-25 01:29:18.001979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.716 qpair failed and we were unable to recover it. 00:28:55.716 [2024-07-25 01:29:18.002488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.716 [2024-07-25 01:29:18.002520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.716 qpair failed and we were unable to recover it. 00:28:55.716 [2024-07-25 01:29:18.002974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.716 [2024-07-25 01:29:18.003006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.716 qpair failed and we were unable to recover it. 
00:28:55.716 [2024-07-25 01:29:18.003562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.716 [2024-07-25 01:29:18.003595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.716 qpair failed and we were unable to recover it. 00:28:55.716 [2024-07-25 01:29:18.004056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.716 [2024-07-25 01:29:18.004094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.716 qpair failed and we were unable to recover it. 00:28:55.716 [2024-07-25 01:29:18.004613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.716 [2024-07-25 01:29:18.004645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.716 qpair failed and we were unable to recover it. 00:28:55.716 [2024-07-25 01:29:18.005171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.716 [2024-07-25 01:29:18.005204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.716 qpair failed and we were unable to recover it. 00:28:55.716 [2024-07-25 01:29:18.005721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.716 [2024-07-25 01:29:18.005752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.716 qpair failed and we were unable to recover it. 
00:28:55.716 [2024-07-25 01:29:18.006296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.716 [2024-07-25 01:29:18.006328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.716 qpair failed and we were unable to recover it. 00:28:55.716 [2024-07-25 01:29:18.006779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.716 [2024-07-25 01:29:18.006810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.716 qpair failed and we were unable to recover it. 00:28:55.716 [2024-07-25 01:29:18.007255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.716 [2024-07-25 01:29:18.007288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.716 qpair failed and we were unable to recover it. 00:28:55.716 [2024-07-25 01:29:18.007685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.716 [2024-07-25 01:29:18.007699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.716 qpair failed and we were unable to recover it. 00:28:55.716 [2024-07-25 01:29:18.008061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.716 [2024-07-25 01:29:18.008076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.716 qpair failed and we were unable to recover it. 
00:28:55.716 [2024-07-25 01:29:18.008486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.716 [2024-07-25 01:29:18.008517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.716 qpair failed and we were unable to recover it. 00:28:55.716 [2024-07-25 01:29:18.008948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.716 [2024-07-25 01:29:18.008979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.716 qpair failed and we were unable to recover it. 00:28:55.716 [2024-07-25 01:29:18.009429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.716 [2024-07-25 01:29:18.009461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.716 qpair failed and we were unable to recover it. 00:28:55.716 [2024-07-25 01:29:18.009930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.716 [2024-07-25 01:29:18.009961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.716 qpair failed and we were unable to recover it. 00:28:55.716 [2024-07-25 01:29:18.010346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.716 [2024-07-25 01:29:18.010379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.716 qpair failed and we were unable to recover it. 
00:28:55.716 [2024-07-25 01:29:18.010831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.716 [2024-07-25 01:29:18.010863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.716 qpair failed and we were unable to recover it. 00:28:55.716 [2024-07-25 01:29:18.011297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.716 [2024-07-25 01:29:18.011330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.716 qpair failed and we were unable to recover it. 00:28:55.716 [2024-07-25 01:29:18.011865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.716 [2024-07-25 01:29:18.011895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.716 qpair failed and we were unable to recover it. 00:28:55.716 [2024-07-25 01:29:18.012332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.716 [2024-07-25 01:29:18.012363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.716 qpair failed and we were unable to recover it. 00:28:55.716 [2024-07-25 01:29:18.012804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.716 [2024-07-25 01:29:18.012835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.716 qpair failed and we were unable to recover it. 
00:28:55.716 [2024-07-25 01:29:18.013287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.716 [2024-07-25 01:29:18.013319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.716 qpair failed and we were unable to recover it. 00:28:55.716 [2024-07-25 01:29:18.013775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.716 [2024-07-25 01:29:18.013806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.716 qpair failed and we were unable to recover it. 00:28:55.716 [2024-07-25 01:29:18.014347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.716 [2024-07-25 01:29:18.014379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.716 qpair failed and we were unable to recover it. 00:28:55.716 [2024-07-25 01:29:18.014903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.716 [2024-07-25 01:29:18.014934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.716 qpair failed and we were unable to recover it. 00:28:55.716 [2024-07-25 01:29:18.015391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.716 [2024-07-25 01:29:18.015423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.716 qpair failed and we were unable to recover it. 
00:28:55.716 [2024-07-25 01:29:18.015876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.717 [2024-07-25 01:29:18.015907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.717 qpair failed and we were unable to recover it. 00:28:55.717 [2024-07-25 01:29:18.016352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.717 [2024-07-25 01:29:18.016384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.717 qpair failed and we were unable to recover it. 00:28:55.717 [2024-07-25 01:29:18.016872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.717 [2024-07-25 01:29:18.016911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.717 qpair failed and we were unable to recover it. 00:28:55.717 [2024-07-25 01:29:18.017438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.717 [2024-07-25 01:29:18.017470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.717 qpair failed and we were unable to recover it. 00:28:55.717 [2024-07-25 01:29:18.017946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.717 [2024-07-25 01:29:18.017978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.717 qpair failed and we were unable to recover it. 
00:28:55.717 [2024-07-25 01:29:18.018477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.717 [2024-07-25 01:29:18.018509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.717 qpair failed and we were unable to recover it. 00:28:55.717 [2024-07-25 01:29:18.018960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.717 [2024-07-25 01:29:18.018990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.717 qpair failed and we were unable to recover it. 00:28:55.717 [2024-07-25 01:29:18.019515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.717 [2024-07-25 01:29:18.019548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.717 qpair failed and we were unable to recover it. 00:28:55.717 [2024-07-25 01:29:18.020014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.717 [2024-07-25 01:29:18.020053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.717 qpair failed and we were unable to recover it. 00:28:55.717 [2024-07-25 01:29:18.020552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.717 [2024-07-25 01:29:18.020583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.717 qpair failed and we were unable to recover it. 
00:28:55.717 [2024-07-25 01:29:18.020761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.717 [2024-07-25 01:29:18.020776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.717 qpair failed and we were unable to recover it. 00:28:55.717 [2024-07-25 01:29:18.021249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.717 [2024-07-25 01:29:18.021282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.717 qpair failed and we were unable to recover it. 00:28:55.717 [2024-07-25 01:29:18.021816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.717 [2024-07-25 01:29:18.021847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.717 qpair failed and we were unable to recover it. 00:28:55.717 [2024-07-25 01:29:18.022288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.717 [2024-07-25 01:29:18.022320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.717 qpair failed and we were unable to recover it. 00:28:55.717 [2024-07-25 01:29:18.022780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.717 [2024-07-25 01:29:18.022810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.717 qpair failed and we were unable to recover it. 
00:28:55.717 [2024-07-25 01:29:18.023322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.717 [2024-07-25 01:29:18.023354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.717 qpair failed and we were unable to recover it. 00:28:55.717 [2024-07-25 01:29:18.023870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.717 [2024-07-25 01:29:18.023906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.717 qpair failed and we were unable to recover it. 00:28:55.717 [2024-07-25 01:29:18.024454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.717 [2024-07-25 01:29:18.024486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.717 qpair failed and we were unable to recover it. 00:28:55.717 [2024-07-25 01:29:18.024935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.717 [2024-07-25 01:29:18.024966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.717 qpair failed and we were unable to recover it. 00:28:55.717 [2024-07-25 01:29:18.025488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.717 [2024-07-25 01:29:18.025520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.717 qpair failed and we were unable to recover it. 
00:28:55.717 [2024-07-25 01:29:18.025970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.717 [2024-07-25 01:29:18.026001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.717 qpair failed and we were unable to recover it. 00:28:55.717 [2024-07-25 01:29:18.026543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.717 [2024-07-25 01:29:18.026576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.717 qpair failed and we were unable to recover it. 00:28:55.717 [2024-07-25 01:29:18.027034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.717 [2024-07-25 01:29:18.027078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.717 qpair failed and we were unable to recover it. 00:28:55.717 [2024-07-25 01:29:18.027578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.717 [2024-07-25 01:29:18.027610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.717 qpair failed and we were unable to recover it. 00:28:55.717 [2024-07-25 01:29:18.028057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.717 [2024-07-25 01:29:18.028089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.717 qpair failed and we were unable to recover it. 
00:28:55.717 [2024-07-25 01:29:18.028539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.717 [2024-07-25 01:29:18.028570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.717 qpair failed and we were unable to recover it.
[identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." messages for tqpair=0x7f83dc000b90 (addr=10.0.0.2, port=4420) repeated through 2024-07-25 01:29:18.085519; repeats omitted]
00:28:55.720 [2024-07-25 01:29:18.086053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.720 [2024-07-25 01:29:18.086085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.720 qpair failed and we were unable to recover it. 00:28:55.720 [2024-07-25 01:29:18.086602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.720 [2024-07-25 01:29:18.086634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.720 qpair failed and we were unable to recover it. 00:28:55.720 [2024-07-25 01:29:18.087161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.720 [2024-07-25 01:29:18.087193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.720 qpair failed and we were unable to recover it. 00:28:55.720 [2024-07-25 01:29:18.087634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.720 [2024-07-25 01:29:18.087665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.720 qpair failed and we were unable to recover it. 00:28:55.720 [2024-07-25 01:29:18.088210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.721 [2024-07-25 01:29:18.088242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.721 qpair failed and we were unable to recover it. 
00:28:55.721 [2024-07-25 01:29:18.088769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.721 [2024-07-25 01:29:18.088799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.721 qpair failed and we were unable to recover it. 00:28:55.721 [2024-07-25 01:29:18.089254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.721 [2024-07-25 01:29:18.089286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.721 qpair failed and we were unable to recover it. 00:28:55.721 [2024-07-25 01:29:18.089817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.721 [2024-07-25 01:29:18.089848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.721 qpair failed and we were unable to recover it. 00:28:55.721 [2024-07-25 01:29:18.090316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.721 [2024-07-25 01:29:18.090348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.721 qpair failed and we were unable to recover it. 00:28:55.721 [2024-07-25 01:29:18.090816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.721 [2024-07-25 01:29:18.090846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.721 qpair failed and we were unable to recover it. 
00:28:55.721 [2024-07-25 01:29:18.091369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.721 [2024-07-25 01:29:18.091407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.721 qpair failed and we were unable to recover it. 00:28:55.721 [2024-07-25 01:29:18.091894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.721 [2024-07-25 01:29:18.091925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.721 qpair failed and we were unable to recover it. 00:28:55.721 [2024-07-25 01:29:18.092363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.721 [2024-07-25 01:29:18.092395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.721 qpair failed and we were unable to recover it. 00:28:55.721 [2024-07-25 01:29:18.092833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.721 [2024-07-25 01:29:18.092863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.721 qpair failed and we were unable to recover it. 00:28:55.721 [2024-07-25 01:29:18.093349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.721 [2024-07-25 01:29:18.093365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.721 qpair failed and we were unable to recover it. 
00:28:55.721 [2024-07-25 01:29:18.093773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.721 [2024-07-25 01:29:18.093804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.721 qpair failed and we were unable to recover it. 00:28:55.721 [2024-07-25 01:29:18.094253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.721 [2024-07-25 01:29:18.094285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.721 qpair failed and we were unable to recover it. 00:28:55.721 [2024-07-25 01:29:18.094804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.721 [2024-07-25 01:29:18.094835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.721 qpair failed and we were unable to recover it. 00:28:55.721 [2024-07-25 01:29:18.095294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.721 [2024-07-25 01:29:18.095326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.721 qpair failed and we were unable to recover it. 00:28:55.721 [2024-07-25 01:29:18.095864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.721 [2024-07-25 01:29:18.095894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.721 qpair failed and we were unable to recover it. 
00:28:55.721 [2024-07-25 01:29:18.096353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.721 [2024-07-25 01:29:18.096385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.721 qpair failed and we were unable to recover it. 00:28:55.721 [2024-07-25 01:29:18.096827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.721 [2024-07-25 01:29:18.096857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.721 qpair failed and we were unable to recover it. 00:28:55.721 [2024-07-25 01:29:18.097320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.721 [2024-07-25 01:29:18.097352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.721 qpair failed and we were unable to recover it. 00:28:55.721 [2024-07-25 01:29:18.097807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.721 [2024-07-25 01:29:18.097838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.721 qpair failed and we were unable to recover it. 00:28:55.721 [2024-07-25 01:29:18.098340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.721 [2024-07-25 01:29:18.098372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.721 qpair failed and we were unable to recover it. 
00:28:55.721 [2024-07-25 01:29:18.098869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.721 [2024-07-25 01:29:18.098899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.721 qpair failed and we were unable to recover it. 00:28:55.721 [2024-07-25 01:29:18.099237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.721 [2024-07-25 01:29:18.099270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.721 qpair failed and we were unable to recover it. 00:28:55.721 [2024-07-25 01:29:18.099742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.721 [2024-07-25 01:29:18.099773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.721 qpair failed and we were unable to recover it. 00:28:55.721 [2024-07-25 01:29:18.100290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.721 [2024-07-25 01:29:18.100322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.721 qpair failed and we were unable to recover it. 00:28:55.721 [2024-07-25 01:29:18.100761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.721 [2024-07-25 01:29:18.100776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.721 qpair failed and we were unable to recover it. 
00:28:55.721 [2024-07-25 01:29:18.101240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.721 [2024-07-25 01:29:18.101256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.721 qpair failed and we were unable to recover it. 00:28:55.721 [2024-07-25 01:29:18.101741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.721 [2024-07-25 01:29:18.101756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.721 qpair failed and we were unable to recover it. 00:28:55.721 [2024-07-25 01:29:18.102225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.721 [2024-07-25 01:29:18.102257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.721 qpair failed and we were unable to recover it. 00:28:55.721 [2024-07-25 01:29:18.102776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.721 [2024-07-25 01:29:18.102807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.721 qpair failed and we were unable to recover it. 00:28:55.721 [2024-07-25 01:29:18.103255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.721 [2024-07-25 01:29:18.103287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.721 qpair failed and we were unable to recover it. 
00:28:55.721 [2024-07-25 01:29:18.103805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.721 [2024-07-25 01:29:18.103836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.721 qpair failed and we were unable to recover it. 00:28:55.721 [2024-07-25 01:29:18.104287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.721 [2024-07-25 01:29:18.104324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.721 qpair failed and we were unable to recover it. 00:28:55.721 [2024-07-25 01:29:18.104789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.721 [2024-07-25 01:29:18.104804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.721 qpair failed and we were unable to recover it. 00:28:55.721 [2024-07-25 01:29:18.105152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.721 [2024-07-25 01:29:18.105184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.721 qpair failed and we were unable to recover it. 00:28:55.721 [2024-07-25 01:29:18.105617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.721 [2024-07-25 01:29:18.105648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.721 qpair failed and we were unable to recover it. 
00:28:55.721 [2024-07-25 01:29:18.106088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.721 [2024-07-25 01:29:18.106121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.721 qpair failed and we were unable to recover it. 00:28:55.721 [2024-07-25 01:29:18.106618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.721 [2024-07-25 01:29:18.106647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.722 qpair failed and we were unable to recover it. 00:28:55.722 [2024-07-25 01:29:18.107093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.722 [2024-07-25 01:29:18.107126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.722 qpair failed and we were unable to recover it. 00:28:55.722 [2024-07-25 01:29:18.107595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.722 [2024-07-25 01:29:18.107626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.722 qpair failed and we were unable to recover it. 00:28:55.722 [2024-07-25 01:29:18.108125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.722 [2024-07-25 01:29:18.108157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.722 qpair failed and we were unable to recover it. 
00:28:55.722 [2024-07-25 01:29:18.108598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.722 [2024-07-25 01:29:18.108629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.722 qpair failed and we were unable to recover it. 00:28:55.722 [2024-07-25 01:29:18.109065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.722 [2024-07-25 01:29:18.109107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.722 qpair failed and we were unable to recover it. 00:28:55.722 [2024-07-25 01:29:18.109567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.722 [2024-07-25 01:29:18.109581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.722 qpair failed and we were unable to recover it. 00:28:55.722 [2024-07-25 01:29:18.109990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.722 [2024-07-25 01:29:18.110005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.722 qpair failed and we were unable to recover it. 00:28:55.722 [2024-07-25 01:29:18.110431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.722 [2024-07-25 01:29:18.110464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.722 qpair failed and we were unable to recover it. 
00:28:55.722 [2024-07-25 01:29:18.110849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.722 [2024-07-25 01:29:18.110881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.722 qpair failed and we were unable to recover it. 00:28:55.722 [2024-07-25 01:29:18.111348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.722 [2024-07-25 01:29:18.111381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.722 qpair failed and we were unable to recover it. 00:28:55.722 [2024-07-25 01:29:18.111827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.722 [2024-07-25 01:29:18.111858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.722 qpair failed and we were unable to recover it. 00:28:55.722 [2024-07-25 01:29:18.112063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.722 [2024-07-25 01:29:18.112095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.722 qpair failed and we were unable to recover it. 00:28:55.722 [2024-07-25 01:29:18.112494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.722 [2024-07-25 01:29:18.112525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.722 qpair failed and we were unable to recover it. 
00:28:55.722 [2024-07-25 01:29:18.113076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.722 [2024-07-25 01:29:18.113108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.722 qpair failed and we were unable to recover it. 00:28:55.722 [2024-07-25 01:29:18.113622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.722 [2024-07-25 01:29:18.113655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.722 qpair failed and we were unable to recover it. 00:28:55.722 [2024-07-25 01:29:18.114178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.722 [2024-07-25 01:29:18.114211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.722 qpair failed and we were unable to recover it. 00:28:55.722 [2024-07-25 01:29:18.114710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.722 [2024-07-25 01:29:18.114741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.722 qpair failed and we were unable to recover it. 00:28:55.722 [2024-07-25 01:29:18.115240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.722 [2024-07-25 01:29:18.115273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.722 qpair failed and we were unable to recover it. 
00:28:55.722 [2024-07-25 01:29:18.115705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.722 [2024-07-25 01:29:18.115736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.722 qpair failed and we were unable to recover it. 00:28:55.722 [2024-07-25 01:29:18.115928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.722 [2024-07-25 01:29:18.115959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.722 qpair failed and we were unable to recover it. 00:28:55.722 [2024-07-25 01:29:18.116411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.722 [2024-07-25 01:29:18.116442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.722 qpair failed and we were unable to recover it. 00:28:55.722 [2024-07-25 01:29:18.116961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.722 [2024-07-25 01:29:18.116996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.722 qpair failed and we were unable to recover it. 00:28:55.722 [2024-07-25 01:29:18.117529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.722 [2024-07-25 01:29:18.117561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.722 qpair failed and we were unable to recover it. 
00:28:55.722 [2024-07-25 01:29:18.118007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.722 [2024-07-25 01:29:18.118038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.722 qpair failed and we were unable to recover it. 00:28:55.722 [2024-07-25 01:29:18.118551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.722 [2024-07-25 01:29:18.118583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.722 qpair failed and we were unable to recover it. 00:28:55.722 [2024-07-25 01:29:18.119025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.722 [2024-07-25 01:29:18.119069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.722 qpair failed and we were unable to recover it. 00:28:55.722 [2024-07-25 01:29:18.119522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.722 [2024-07-25 01:29:18.119552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.722 qpair failed and we were unable to recover it. 00:28:55.722 [2024-07-25 01:29:18.120063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.722 [2024-07-25 01:29:18.120096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.722 qpair failed and we were unable to recover it. 
00:28:55.722 [2024-07-25 01:29:18.120541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.722 [2024-07-25 01:29:18.120556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.722 qpair failed and we were unable to recover it. 00:28:55.722 [2024-07-25 01:29:18.120961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.722 [2024-07-25 01:29:18.120976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.722 qpair failed and we were unable to recover it. 00:28:55.722 [2024-07-25 01:29:18.121386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.722 [2024-07-25 01:29:18.121402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.722 qpair failed and we were unable to recover it. 00:28:55.722 [2024-07-25 01:29:18.121880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.722 [2024-07-25 01:29:18.121912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.722 qpair failed and we were unable to recover it. 00:28:55.722 [2024-07-25 01:29:18.123108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.722 [2024-07-25 01:29:18.123134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.722 qpair failed and we were unable to recover it. 
00:28:55.722 [2024-07-25 01:29:18.123583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.722 [2024-07-25 01:29:18.123599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.722 qpair failed and we were unable to recover it. 00:28:55.722 [2024-07-25 01:29:18.124011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.722 [2024-07-25 01:29:18.124026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.722 qpair failed and we were unable to recover it. 00:28:55.722 [2024-07-25 01:29:18.124512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.722 [2024-07-25 01:29:18.124544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.722 qpair failed and we were unable to recover it. 00:28:55.722 [2024-07-25 01:29:18.125012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.722 [2024-07-25 01:29:18.125027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.722 qpair failed and we were unable to recover it. 00:28:55.723 [2024-07-25 01:29:18.125261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.723 [2024-07-25 01:29:18.125277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.723 qpair failed and we were unable to recover it. 
00:28:55.726 [2024-07-25 01:29:18.175595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.726 [2024-07-25 01:29:18.175630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.726 qpair failed and we were unable to recover it. 00:28:55.726 [2024-07-25 01:29:18.175807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.726 [2024-07-25 01:29:18.175838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.726 qpair failed and we were unable to recover it. 00:28:55.726 [2024-07-25 01:29:18.176362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.726 [2024-07-25 01:29:18.176395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.726 qpair failed and we were unable to recover it. 00:28:55.726 [2024-07-25 01:29:18.176777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.726 [2024-07-25 01:29:18.176808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.726 qpair failed and we were unable to recover it. 00:28:55.726 [2024-07-25 01:29:18.177252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.726 [2024-07-25 01:29:18.177285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.726 qpair failed and we were unable to recover it. 
00:28:55.726 [2024-07-25 01:29:18.177734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.726 [2024-07-25 01:29:18.177765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.726 qpair failed and we were unable to recover it. 00:28:55.726 [2024-07-25 01:29:18.178197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.726 [2024-07-25 01:29:18.178230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.726 qpair failed and we were unable to recover it. 00:28:55.726 [2024-07-25 01:29:18.178758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.726 [2024-07-25 01:29:18.178789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.726 qpair failed and we were unable to recover it. 00:28:55.726 [2024-07-25 01:29:18.179240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.726 [2024-07-25 01:29:18.179273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.726 qpair failed and we were unable to recover it. 00:28:55.726 [2024-07-25 01:29:18.179720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.726 [2024-07-25 01:29:18.179752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.726 qpair failed and we were unable to recover it. 
00:28:55.726 [2024-07-25 01:29:18.180186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.726 [2024-07-25 01:29:18.180218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.726 qpair failed and we were unable to recover it. 00:28:55.726 [2024-07-25 01:29:18.180525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.726 [2024-07-25 01:29:18.180555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.726 qpair failed and we were unable to recover it. 00:28:55.726 [2024-07-25 01:29:18.180988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.726 [2024-07-25 01:29:18.181018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.726 qpair failed and we were unable to recover it. 00:28:55.726 [2024-07-25 01:29:18.181558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.726 [2024-07-25 01:29:18.181591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.726 qpair failed and we were unable to recover it. 00:28:55.726 [2024-07-25 01:29:18.182115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.726 [2024-07-25 01:29:18.182148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.726 qpair failed and we were unable to recover it. 
00:28:55.726 [2024-07-25 01:29:18.182647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.726 [2024-07-25 01:29:18.182678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.726 qpair failed and we were unable to recover it. 00:28:55.726 [2024-07-25 01:29:18.183138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.726 [2024-07-25 01:29:18.183169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.726 qpair failed and we were unable to recover it. 00:28:55.726 [2024-07-25 01:29:18.183675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.726 [2024-07-25 01:29:18.183707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.726 qpair failed and we were unable to recover it. 00:28:55.726 [2024-07-25 01:29:18.184228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.726 [2024-07-25 01:29:18.184259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.726 qpair failed and we were unable to recover it. 00:28:55.726 [2024-07-25 01:29:18.184707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.726 [2024-07-25 01:29:18.184739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.726 qpair failed and we were unable to recover it. 
00:28:55.726 [2024-07-25 01:29:18.185184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.726 [2024-07-25 01:29:18.185216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.726 qpair failed and we were unable to recover it. 00:28:55.726 [2024-07-25 01:29:18.185735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.726 [2024-07-25 01:29:18.185766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.726 qpair failed and we were unable to recover it. 00:28:55.726 [2024-07-25 01:29:18.186290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.726 [2024-07-25 01:29:18.186322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.726 qpair failed and we were unable to recover it. 00:28:55.726 [2024-07-25 01:29:18.186508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.726 [2024-07-25 01:29:18.186539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.726 qpair failed and we were unable to recover it. 00:28:55.726 [2024-07-25 01:29:18.186889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.726 [2024-07-25 01:29:18.186904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.726 qpair failed and we were unable to recover it. 
00:28:55.726 [2024-07-25 01:29:18.187304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.726 [2024-07-25 01:29:18.187320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.726 qpair failed and we were unable to recover it. 00:28:55.726 [2024-07-25 01:29:18.187794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.726 [2024-07-25 01:29:18.187825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.726 qpair failed and we were unable to recover it. 00:28:55.726 [2024-07-25 01:29:18.188280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.726 [2024-07-25 01:29:18.188295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.726 qpair failed and we were unable to recover it. 00:28:55.726 [2024-07-25 01:29:18.188698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.726 [2024-07-25 01:29:18.188729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.726 qpair failed and we were unable to recover it. 00:28:55.726 [2024-07-25 01:29:18.189212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.726 [2024-07-25 01:29:18.189244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.726 qpair failed and we were unable to recover it. 
00:28:55.726 [2024-07-25 01:29:18.189719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.726 [2024-07-25 01:29:18.189750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.726 qpair failed and we were unable to recover it. 00:28:55.726 [2024-07-25 01:29:18.190211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.726 [2024-07-25 01:29:18.190243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.726 qpair failed and we were unable to recover it. 00:28:55.726 [2024-07-25 01:29:18.190689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.726 [2024-07-25 01:29:18.190704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.726 qpair failed and we were unable to recover it. 00:28:55.726 [2024-07-25 01:29:18.191116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.727 [2024-07-25 01:29:18.191135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.727 qpair failed and we were unable to recover it. 00:28:55.727 [2024-07-25 01:29:18.191539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.727 [2024-07-25 01:29:18.191554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.727 qpair failed and we were unable to recover it. 
00:28:55.727 [2024-07-25 01:29:18.192014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.727 [2024-07-25 01:29:18.192029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.727 qpair failed and we were unable to recover it. 00:28:55.727 [2024-07-25 01:29:18.192401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.727 [2024-07-25 01:29:18.192433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.727 qpair failed and we were unable to recover it. 00:28:55.996 [2024-07-25 01:29:18.192767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.996 [2024-07-25 01:29:18.192800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.996 qpair failed and we were unable to recover it. 00:28:55.996 [2024-07-25 01:29:18.193239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.996 [2024-07-25 01:29:18.193271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.996 qpair failed and we were unable to recover it. 00:28:55.996 [2024-07-25 01:29:18.193720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.996 [2024-07-25 01:29:18.193751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.996 qpair failed and we were unable to recover it. 
00:28:55.996 [2024-07-25 01:29:18.194156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.996 [2024-07-25 01:29:18.194189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.996 qpair failed and we were unable to recover it. 00:28:55.996 [2024-07-25 01:29:18.194570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.996 [2024-07-25 01:29:18.194602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.996 qpair failed and we were unable to recover it. 00:28:55.997 [2024-07-25 01:29:18.195054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.997 [2024-07-25 01:29:18.195087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.997 qpair failed and we were unable to recover it. 00:28:55.997 [2024-07-25 01:29:18.195299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.997 [2024-07-25 01:29:18.195314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.997 qpair failed and we were unable to recover it. 00:28:55.997 [2024-07-25 01:29:18.195704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.997 [2024-07-25 01:29:18.195734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.997 qpair failed and we were unable to recover it. 
00:28:55.997 [2024-07-25 01:29:18.196107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.997 [2024-07-25 01:29:18.196139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.997 qpair failed and we were unable to recover it. 00:28:55.997 [2024-07-25 01:29:18.196583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.997 [2024-07-25 01:29:18.196613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.997 qpair failed and we were unable to recover it. 00:28:55.997 [2024-07-25 01:29:18.197120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.997 [2024-07-25 01:29:18.197153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.997 qpair failed and we were unable to recover it. 00:28:55.997 [2024-07-25 01:29:18.197592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.997 [2024-07-25 01:29:18.197622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.997 qpair failed and we were unable to recover it. 00:28:55.997 [2024-07-25 01:29:18.198056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.997 [2024-07-25 01:29:18.198072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.997 qpair failed and we were unable to recover it. 
00:28:55.997 [2024-07-25 01:29:18.198580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.997 [2024-07-25 01:29:18.198610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.997 qpair failed and we were unable to recover it. 00:28:55.997 [2024-07-25 01:29:18.199100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.997 [2024-07-25 01:29:18.199131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.997 qpair failed and we were unable to recover it. 00:28:55.997 [2024-07-25 01:29:18.199604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.997 [2024-07-25 01:29:18.199635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.997 qpair failed and we were unable to recover it. 00:28:55.997 [2024-07-25 01:29:18.200083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.997 [2024-07-25 01:29:18.200114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.997 qpair failed and we were unable to recover it. 00:28:55.997 [2024-07-25 01:29:18.200556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.997 [2024-07-25 01:29:18.200587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.997 qpair failed and we were unable to recover it. 
00:28:55.997 [2024-07-25 01:29:18.200972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.997 [2024-07-25 01:29:18.201004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.997 qpair failed and we were unable to recover it. 00:28:55.997 [2024-07-25 01:29:18.201507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.997 [2024-07-25 01:29:18.201539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.997 qpair failed and we were unable to recover it. 00:28:55.997 [2024-07-25 01:29:18.201980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.997 [2024-07-25 01:29:18.202012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.997 qpair failed and we were unable to recover it. 00:28:55.997 [2024-07-25 01:29:18.202409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.997 [2024-07-25 01:29:18.202441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.997 qpair failed and we were unable to recover it. 00:28:55.997 [2024-07-25 01:29:18.202907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.997 [2024-07-25 01:29:18.202938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.997 qpair failed and we were unable to recover it. 
00:28:55.997 [2024-07-25 01:29:18.203396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.997 [2024-07-25 01:29:18.203429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.997 qpair failed and we were unable to recover it. 00:28:55.997 [2024-07-25 01:29:18.203947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.997 [2024-07-25 01:29:18.203979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.997 qpair failed and we were unable to recover it. 00:28:55.997 [2024-07-25 01:29:18.204433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.997 [2024-07-25 01:29:18.204464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.997 qpair failed and we were unable to recover it. 00:28:55.997 [2024-07-25 01:29:18.205000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.997 [2024-07-25 01:29:18.205031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.997 qpair failed and we were unable to recover it. 00:28:55.997 [2024-07-25 01:29:18.205500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.997 [2024-07-25 01:29:18.205531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.997 qpair failed and we were unable to recover it. 
00:28:55.997 [2024-07-25 01:29:18.206029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.997 [2024-07-25 01:29:18.206069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.997 qpair failed and we were unable to recover it. 00:28:55.997 [2024-07-25 01:29:18.206639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.997 [2024-07-25 01:29:18.206680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.997 qpair failed and we were unable to recover it. 00:28:55.997 [2024-07-25 01:29:18.207157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.997 [2024-07-25 01:29:18.207189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.997 qpair failed and we were unable to recover it. 00:28:55.997 [2024-07-25 01:29:18.207633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.997 [2024-07-25 01:29:18.207664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.997 qpair failed and we were unable to recover it. 00:28:55.997 [2024-07-25 01:29:18.208053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.997 [2024-07-25 01:29:18.208086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.997 qpair failed and we were unable to recover it. 
00:28:55.997 [2024-07-25 01:29:18.208594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.997 [2024-07-25 01:29:18.208625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.997 qpair failed and we were unable to recover it. 00:28:55.997 [2024-07-25 01:29:18.209075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.997 [2024-07-25 01:29:18.209107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.997 qpair failed and we were unable to recover it. 00:28:55.997 [2024-07-25 01:29:18.209552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.997 [2024-07-25 01:29:18.209582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.997 qpair failed and we were unable to recover it. 00:28:55.997 [2024-07-25 01:29:18.209962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.997 [2024-07-25 01:29:18.209999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.997 qpair failed and we were unable to recover it. 00:28:55.997 [2024-07-25 01:29:18.210530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.997 [2024-07-25 01:29:18.210563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.997 qpair failed and we were unable to recover it. 
00:28:55.997 [2024-07-25 01:29:18.211085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.997 [2024-07-25 01:29:18.211117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:55.997 qpair failed and we were unable to recover it. 
00:28:56.000 [... last message repeated for subsequent connect() attempts through 2024-07-25 01:29:18.267737, same tqpair=0x7f83dc000b90, addr=10.0.0.2, port=4420, errno = 111 ...] 
00:28:56.000 [2024-07-25 01:29:18.268117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.000 [2024-07-25 01:29:18.268152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.000 qpair failed and we were unable to recover it. 00:28:56.000 [2024-07-25 01:29:18.268582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.001 [2024-07-25 01:29:18.268611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.001 qpair failed and we were unable to recover it. 00:28:56.001 [2024-07-25 01:29:18.269006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.001 [2024-07-25 01:29:18.269037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.001 qpair failed and we were unable to recover it. 00:28:56.001 [2024-07-25 01:29:18.269446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.001 [2024-07-25 01:29:18.269482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.001 qpair failed and we were unable to recover it. 00:28:56.001 [2024-07-25 01:29:18.269945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.001 [2024-07-25 01:29:18.269976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.001 qpair failed and we were unable to recover it. 
00:28:56.001 [2024-07-25 01:29:18.270477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.001 [2024-07-25 01:29:18.270508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.001 qpair failed and we were unable to recover it. 00:28:56.001 [2024-07-25 01:29:18.271026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.001 [2024-07-25 01:29:18.271065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.001 qpair failed and we were unable to recover it. 00:28:56.001 [2024-07-25 01:29:18.271527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.001 [2024-07-25 01:29:18.271557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.001 qpair failed and we were unable to recover it. 00:28:56.001 [2024-07-25 01:29:18.272064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.001 [2024-07-25 01:29:18.272096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.001 qpair failed and we were unable to recover it. 00:28:56.001 [2024-07-25 01:29:18.272638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.001 [2024-07-25 01:29:18.272670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.001 qpair failed and we were unable to recover it. 
00:28:56.001 [2024-07-25 01:29:18.273156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.001 [2024-07-25 01:29:18.273187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.001 qpair failed and we were unable to recover it. 00:28:56.001 [2024-07-25 01:29:18.273624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.001 [2024-07-25 01:29:18.273655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.001 qpair failed and we were unable to recover it. 00:28:56.001 [2024-07-25 01:29:18.274164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.001 [2024-07-25 01:29:18.274204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.001 qpair failed and we were unable to recover it. 00:28:56.001 [2024-07-25 01:29:18.274626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.001 [2024-07-25 01:29:18.274640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.001 qpair failed and we were unable to recover it. 00:28:56.001 [2024-07-25 01:29:18.275166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.001 [2024-07-25 01:29:18.275181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.001 qpair failed and we were unable to recover it. 
00:28:56.001 [2024-07-25 01:29:18.275611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.001 [2024-07-25 01:29:18.275643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.001 qpair failed and we were unable to recover it. 00:28:56.001 [2024-07-25 01:29:18.276090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.001 [2024-07-25 01:29:18.276122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.001 qpair failed and we were unable to recover it. 00:28:56.001 [2024-07-25 01:29:18.276618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.001 [2024-07-25 01:29:18.276633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.001 qpair failed and we were unable to recover it. 00:28:56.001 [2024-07-25 01:29:18.277129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.001 [2024-07-25 01:29:18.277161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.001 qpair failed and we were unable to recover it. 00:28:56.001 [2024-07-25 01:29:18.277640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.001 [2024-07-25 01:29:18.277671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.001 qpair failed and we were unable to recover it. 
00:28:56.001 [2024-07-25 01:29:18.278068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.001 [2024-07-25 01:29:18.278106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.001 qpair failed and we were unable to recover it. 00:28:56.001 [2024-07-25 01:29:18.278618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.001 [2024-07-25 01:29:18.278632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.001 qpair failed and we were unable to recover it. 00:28:56.001 [2024-07-25 01:29:18.279148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.001 [2024-07-25 01:29:18.279179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.001 qpair failed and we were unable to recover it. 00:28:56.001 [2024-07-25 01:29:18.279624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.001 [2024-07-25 01:29:18.279654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.001 qpair failed and we were unable to recover it. 00:28:56.001 [2024-07-25 01:29:18.280149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.001 [2024-07-25 01:29:18.280181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.001 qpair failed and we were unable to recover it. 
00:28:56.001 [2024-07-25 01:29:18.280676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.001 [2024-07-25 01:29:18.280706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.001 qpair failed and we were unable to recover it. 00:28:56.001 [2024-07-25 01:29:18.281244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.001 [2024-07-25 01:29:18.281276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.001 qpair failed and we were unable to recover it. 00:28:56.001 [2024-07-25 01:29:18.281825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.001 [2024-07-25 01:29:18.281856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.001 qpair failed and we were unable to recover it. 00:28:56.001 [2024-07-25 01:29:18.282285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.001 [2024-07-25 01:29:18.282317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.001 qpair failed and we were unable to recover it. 00:28:56.001 [2024-07-25 01:29:18.282758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.001 [2024-07-25 01:29:18.282789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.001 qpair failed and we were unable to recover it. 
00:28:56.001 [2024-07-25 01:29:18.283324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.001 [2024-07-25 01:29:18.283356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.001 qpair failed and we were unable to recover it. 00:28:56.001 [2024-07-25 01:29:18.283923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.002 [2024-07-25 01:29:18.283955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.002 qpair failed and we were unable to recover it. 00:28:56.002 [2024-07-25 01:29:18.284455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.002 [2024-07-25 01:29:18.284487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.002 qpair failed and we were unable to recover it. 00:28:56.002 [2024-07-25 01:29:18.284882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.002 [2024-07-25 01:29:18.284912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.002 qpair failed and we were unable to recover it. 00:28:56.002 [2024-07-25 01:29:18.285436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.002 [2024-07-25 01:29:18.285467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.002 qpair failed and we were unable to recover it. 
00:28:56.002 [2024-07-25 01:29:18.285912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.002 [2024-07-25 01:29:18.285943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.002 qpair failed and we were unable to recover it. 00:28:56.002 [2024-07-25 01:29:18.286466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.002 [2024-07-25 01:29:18.286498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.002 qpair failed and we were unable to recover it. 00:28:56.002 [2024-07-25 01:29:18.286999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.002 [2024-07-25 01:29:18.287030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.002 qpair failed and we were unable to recover it. 00:28:56.002 [2024-07-25 01:29:18.287437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.002 [2024-07-25 01:29:18.287469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.002 qpair failed and we were unable to recover it. 00:28:56.002 [2024-07-25 01:29:18.288017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.002 [2024-07-25 01:29:18.288057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.002 qpair failed and we were unable to recover it. 
00:28:56.002 [2024-07-25 01:29:18.288581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.002 [2024-07-25 01:29:18.288612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.002 qpair failed and we were unable to recover it. 00:28:56.002 [2024-07-25 01:29:18.289129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.002 [2024-07-25 01:29:18.289161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.002 qpair failed and we were unable to recover it. 00:28:56.002 [2024-07-25 01:29:18.289634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.002 [2024-07-25 01:29:18.289664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.002 qpair failed and we were unable to recover it. 00:28:56.002 [2024-07-25 01:29:18.290105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.002 [2024-07-25 01:29:18.290142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.002 qpair failed and we were unable to recover it. 00:28:56.002 [2024-07-25 01:29:18.290642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.002 [2024-07-25 01:29:18.290673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.002 qpair failed and we were unable to recover it. 
00:28:56.002 [2024-07-25 01:29:18.291170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.002 [2024-07-25 01:29:18.291202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.002 qpair failed and we were unable to recover it. 00:28:56.002 [2024-07-25 01:29:18.291606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.002 [2024-07-25 01:29:18.291637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.002 qpair failed and we were unable to recover it. 00:28:56.002 [2024-07-25 01:29:18.292090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.002 [2024-07-25 01:29:18.292122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.002 qpair failed and we were unable to recover it. 00:28:56.002 [2024-07-25 01:29:18.292552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.002 [2024-07-25 01:29:18.292583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.002 qpair failed and we were unable to recover it. 00:28:56.002 [2024-07-25 01:29:18.293080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.002 [2024-07-25 01:29:18.293113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.002 qpair failed and we were unable to recover it. 
00:28:56.002 [2024-07-25 01:29:18.293569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.002 [2024-07-25 01:29:18.293584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.002 qpair failed and we were unable to recover it. 00:28:56.002 [2024-07-25 01:29:18.294060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.002 [2024-07-25 01:29:18.294075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.002 qpair failed and we were unable to recover it. 00:28:56.002 [2024-07-25 01:29:18.295351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.002 [2024-07-25 01:29:18.295381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.002 qpair failed and we were unable to recover it. 00:28:56.002 [2024-07-25 01:29:18.295896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.002 [2024-07-25 01:29:18.295930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.002 qpair failed and we were unable to recover it. 00:28:56.002 [2024-07-25 01:29:18.296326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.002 [2024-07-25 01:29:18.296359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.002 qpair failed and we were unable to recover it. 
00:28:56.002 [2024-07-25 01:29:18.296810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.002 [2024-07-25 01:29:18.296841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.002 qpair failed and we were unable to recover it. 00:28:56.002 [2024-07-25 01:29:18.297301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.002 [2024-07-25 01:29:18.297316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.002 qpair failed and we were unable to recover it. 00:28:56.002 [2024-07-25 01:29:18.297721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.002 [2024-07-25 01:29:18.297738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.002 qpair failed and we were unable to recover it. 00:28:56.002 [2024-07-25 01:29:18.298158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.002 [2024-07-25 01:29:18.298190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.002 qpair failed and we were unable to recover it. 00:28:56.002 [2024-07-25 01:29:18.298586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.002 [2024-07-25 01:29:18.298616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.002 qpair failed and we were unable to recover it. 
00:28:56.002 [2024-07-25 01:29:18.299088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.002 [2024-07-25 01:29:18.299121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.002 qpair failed and we were unable to recover it. 00:28:56.002 [2024-07-25 01:29:18.299640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.002 [2024-07-25 01:29:18.299655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.002 qpair failed and we were unable to recover it. 00:28:56.002 [2024-07-25 01:29:18.300093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.002 [2024-07-25 01:29:18.300126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.002 qpair failed and we were unable to recover it. 00:28:56.002 [2024-07-25 01:29:18.300594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.002 [2024-07-25 01:29:18.300625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.002 qpair failed and we were unable to recover it. 00:28:56.002 [2024-07-25 01:29:18.301147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.002 [2024-07-25 01:29:18.301178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.002 qpair failed and we were unable to recover it. 
00:28:56.002 [2024-07-25 01:29:18.301558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.002 [2024-07-25 01:29:18.301589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.002 qpair failed and we were unable to recover it. 00:28:56.002 [2024-07-25 01:29:18.302017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.002 [2024-07-25 01:29:18.302062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.002 qpair failed and we were unable to recover it. 00:28:56.002 [2024-07-25 01:29:18.302562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.002 [2024-07-25 01:29:18.302593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.002 qpair failed and we were unable to recover it. 00:28:56.002 [2024-07-25 01:29:18.302988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.002 [2024-07-25 01:29:18.303018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.002 qpair failed and we were unable to recover it. 00:28:56.002 [2024-07-25 01:29:18.303416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.003 [2024-07-25 01:29:18.303449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.003 qpair failed and we were unable to recover it. 
00:28:56.003 [2024-07-25 01:29:18.303977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.003 [2024-07-25 01:29:18.303993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.003 qpair failed and we were unable to recover it. 00:28:56.003 [2024-07-25 01:29:18.304464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.003 [2024-07-25 01:29:18.304497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.003 qpair failed and we were unable to recover it. 00:28:56.003 [2024-07-25 01:29:18.304960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.003 [2024-07-25 01:29:18.304990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.003 qpair failed and we were unable to recover it. 00:28:56.003 [2024-07-25 01:29:18.305386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.003 [2024-07-25 01:29:18.305418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.003 qpair failed and we were unable to recover it. 00:28:56.003 [2024-07-25 01:29:18.305959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.003 [2024-07-25 01:29:18.305989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.003 qpair failed and we were unable to recover it. 
00:28:56.003 [2024-07-25 01:29:18.306522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.003 [2024-07-25 01:29:18.306553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.003 qpair failed and we were unable to recover it.
00:28:56.003 [2024-07-25 01:29:18.306992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.003 [2024-07-25 01:29:18.307022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.003 qpair failed and we were unable to recover it.
00:28:56.003 [2024-07-25 01:29:18.307493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.003 [2024-07-25 01:29:18.307525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.003 qpair failed and we were unable to recover it.
00:28:56.003 [2024-07-25 01:29:18.307922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.003 [2024-07-25 01:29:18.307953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.003 qpair failed and we were unable to recover it.
00:28:56.003 [2024-07-25 01:29:18.308398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.003 [2024-07-25 01:29:18.308430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.003 qpair failed and we were unable to recover it.
00:28:56.003 [2024-07-25 01:29:18.308953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.003 [2024-07-25 01:29:18.308983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.003 qpair failed and we were unable to recover it.
00:28:56.003 [2024-07-25 01:29:18.309388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.003 [2024-07-25 01:29:18.309420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.003 qpair failed and we were unable to recover it.
00:28:56.003 [2024-07-25 01:29:18.309859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.003 [2024-07-25 01:29:18.309888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.003 qpair failed and we were unable to recover it.
00:28:56.003 [2024-07-25 01:29:18.310387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.003 [2024-07-25 01:29:18.310424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.003 qpair failed and we were unable to recover it.
00:28:56.003 [2024-07-25 01:29:18.310819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.003 [2024-07-25 01:29:18.310850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.003 qpair failed and we were unable to recover it.
00:28:56.003 [2024-07-25 01:29:18.311358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.003 [2024-07-25 01:29:18.311373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.003 qpair failed and we were unable to recover it.
00:28:56.003 [2024-07-25 01:29:18.311785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.003 [2024-07-25 01:29:18.311799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.003 qpair failed and we were unable to recover it.
00:28:56.003 [2024-07-25 01:29:18.312205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.003 [2024-07-25 01:29:18.312220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.003 qpair failed and we were unable to recover it.
00:28:56.003 [2024-07-25 01:29:18.312620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.003 [2024-07-25 01:29:18.312650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.003 qpair failed and we were unable to recover it.
00:28:56.003 [2024-07-25 01:29:18.313172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.003 [2024-07-25 01:29:18.313203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.003 qpair failed and we were unable to recover it.
00:28:56.003 [2024-07-25 01:29:18.313597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.003 [2024-07-25 01:29:18.313627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.003 qpair failed and we were unable to recover it.
00:28:56.003 [2024-07-25 01:29:18.314076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.003 [2024-07-25 01:29:18.314107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.003 qpair failed and we were unable to recover it.
00:28:56.003 [2024-07-25 01:29:18.314630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.003 [2024-07-25 01:29:18.314661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.003 qpair failed and we were unable to recover it.
00:28:56.003 [2024-07-25 01:29:18.314908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.003 [2024-07-25 01:29:18.314939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.003 qpair failed and we were unable to recover it.
00:28:56.003 [2024-07-25 01:29:18.315459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.003 [2024-07-25 01:29:18.315491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.003 qpair failed and we were unable to recover it.
00:28:56.003 [2024-07-25 01:29:18.315954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.003 [2024-07-25 01:29:18.315993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.003 qpair failed and we were unable to recover it.
00:28:56.003 [2024-07-25 01:29:18.316488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.003 [2024-07-25 01:29:18.316520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.003 qpair failed and we were unable to recover it.
00:28:56.003 [2024-07-25 01:29:18.316976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.003 [2024-07-25 01:29:18.317006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.003 qpair failed and we were unable to recover it.
00:28:56.003 [2024-07-25 01:29:18.317484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.003 [2024-07-25 01:29:18.317517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.003 qpair failed and we were unable to recover it.
00:28:56.003 [2024-07-25 01:29:18.318015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.003 [2024-07-25 01:29:18.318055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.003 qpair failed and we were unable to recover it.
00:28:56.003 [2024-07-25 01:29:18.318589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.003 [2024-07-25 01:29:18.318620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.003 qpair failed and we were unable to recover it.
00:28:56.003 [2024-07-25 01:29:18.319010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.003 [2024-07-25 01:29:18.319040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.003 qpair failed and we were unable to recover it.
00:28:56.003 [2024-07-25 01:29:18.319549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.003 [2024-07-25 01:29:18.319580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.003 qpair failed and we were unable to recover it.
00:28:56.003 [2024-07-25 01:29:18.319966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.003 [2024-07-25 01:29:18.319997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.003 qpair failed and we were unable to recover it.
00:28:56.003 [2024-07-25 01:29:18.320466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.003 [2024-07-25 01:29:18.320498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.003 qpair failed and we were unable to recover it.
00:28:56.003 [2024-07-25 01:29:18.320998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.003 [2024-07-25 01:29:18.321029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.003 qpair failed and we were unable to recover it.
00:28:56.003 [2024-07-25 01:29:18.321486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.003 [2024-07-25 01:29:18.321518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.003 qpair failed and we were unable to recover it.
00:28:56.004 [2024-07-25 01:29:18.321887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.004 [2024-07-25 01:29:18.321917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.004 qpair failed and we were unable to recover it.
00:28:56.004 [2024-07-25 01:29:18.322345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.004 [2024-07-25 01:29:18.322361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.004 qpair failed and we were unable to recover it.
00:28:56.004 [2024-07-25 01:29:18.322804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.004 [2024-07-25 01:29:18.322835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.004 qpair failed and we were unable to recover it.
00:28:56.004 [2024-07-25 01:29:18.323278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.004 [2024-07-25 01:29:18.323311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.004 qpair failed and we were unable to recover it.
00:28:56.004 [2024-07-25 01:29:18.323763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.004 [2024-07-25 01:29:18.323793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.004 qpair failed and we were unable to recover it.
00:28:56.004 [2024-07-25 01:29:18.324185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.004 [2024-07-25 01:29:18.324200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.004 qpair failed and we were unable to recover it.
00:28:56.004 [2024-07-25 01:29:18.324665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.004 [2024-07-25 01:29:18.324696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.004 qpair failed and we were unable to recover it.
00:28:56.004 [2024-07-25 01:29:18.325151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.004 [2024-07-25 01:29:18.325183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.004 qpair failed and we were unable to recover it.
00:28:56.004 [2024-07-25 01:29:18.325683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.004 [2024-07-25 01:29:18.325713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.004 qpair failed and we were unable to recover it.
00:28:56.004 [2024-07-25 01:29:18.326111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.004 [2024-07-25 01:29:18.326126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.004 qpair failed and we were unable to recover it.
00:28:56.004 [2024-07-25 01:29:18.326529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.004 [2024-07-25 01:29:18.326543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.004 qpair failed and we were unable to recover it.
00:28:56.004 [2024-07-25 01:29:18.326964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.004 [2024-07-25 01:29:18.326979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.004 qpair failed and we were unable to recover it.
00:28:56.004 [2024-07-25 01:29:18.327471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.004 [2024-07-25 01:29:18.327502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.004 qpair failed and we were unable to recover it.
00:28:56.004 [2024-07-25 01:29:18.327975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.004 [2024-07-25 01:29:18.328006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.004 qpair failed and we were unable to recover it.
00:28:56.004 [2024-07-25 01:29:18.328513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.004 [2024-07-25 01:29:18.328545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.004 qpair failed and we were unable to recover it.
00:28:56.004 [2024-07-25 01:29:18.328973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.004 [2024-07-25 01:29:18.329004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.004 qpair failed and we were unable to recover it.
00:28:56.004 [2024-07-25 01:29:18.329480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.004 [2024-07-25 01:29:18.329530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.004 qpair failed and we were unable to recover it.
00:28:56.004 [2024-07-25 01:29:18.330064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.004 [2024-07-25 01:29:18.330096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.004 qpair failed and we were unable to recover it.
00:28:56.004 [2024-07-25 01:29:18.330596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.004 [2024-07-25 01:29:18.330627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.004 qpair failed and we were unable to recover it.
00:28:56.004 [2024-07-25 01:29:18.330991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.004 [2024-07-25 01:29:18.331022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.004 qpair failed and we were unable to recover it.
00:28:56.004 [2024-07-25 01:29:18.331477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.004 [2024-07-25 01:29:18.331509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.004 qpair failed and we were unable to recover it.
00:28:56.004 [2024-07-25 01:29:18.332008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.004 [2024-07-25 01:29:18.332038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.004 qpair failed and we were unable to recover it.
00:28:56.004 [2024-07-25 01:29:18.332492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.004 [2024-07-25 01:29:18.332523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.004 qpair failed and we were unable to recover it.
00:28:56.004 [2024-07-25 01:29:18.333036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.004 [2024-07-25 01:29:18.333078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.004 qpair failed and we were unable to recover it.
00:28:56.004 [2024-07-25 01:29:18.333595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.004 [2024-07-25 01:29:18.333625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.004 qpair failed and we were unable to recover it.
00:28:56.004 [2024-07-25 01:29:18.334030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.004 [2024-07-25 01:29:18.334073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.004 qpair failed and we were unable to recover it.
00:28:56.004 [2024-07-25 01:29:18.334321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.004 [2024-07-25 01:29:18.334351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.004 qpair failed and we were unable to recover it.
00:28:56.004 [2024-07-25 01:29:18.334733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.004 [2024-07-25 01:29:18.334764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.004 qpair failed and we were unable to recover it.
00:28:56.004 [2024-07-25 01:29:18.335225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.004 [2024-07-25 01:29:18.335258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.004 qpair failed and we were unable to recover it.
00:28:56.004 [2024-07-25 01:29:18.335723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.004 [2024-07-25 01:29:18.335753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.004 qpair failed and we were unable to recover it.
00:28:56.004 [2024-07-25 01:29:18.336282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.004 [2024-07-25 01:29:18.336314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.004 qpair failed and we were unable to recover it.
00:28:56.004 [2024-07-25 01:29:18.336818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.004 [2024-07-25 01:29:18.336833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.004 qpair failed and we were unable to recover it.
00:28:56.004 [2024-07-25 01:29:18.337309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.004 [2024-07-25 01:29:18.337342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.004 qpair failed and we were unable to recover it.
00:28:56.004 [2024-07-25 01:29:18.337788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.004 [2024-07-25 01:29:18.337819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.004 qpair failed and we were unable to recover it.
00:28:56.004 [2024-07-25 01:29:18.338209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.004 [2024-07-25 01:29:18.338241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.004 qpair failed and we were unable to recover it.
00:28:56.004 [2024-07-25 01:29:18.338769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.004 [2024-07-25 01:29:18.338799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.004 qpair failed and we were unable to recover it.
00:28:56.004 [2024-07-25 01:29:18.339234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.004 [2024-07-25 01:29:18.339266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.004 qpair failed and we were unable to recover it.
00:28:56.004 [2024-07-25 01:29:18.339706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.004 [2024-07-25 01:29:18.339736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.005 qpair failed and we were unable to recover it.
00:28:56.005 [2024-07-25 01:29:18.340164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.005 [2024-07-25 01:29:18.340196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.005 qpair failed and we were unable to recover it.
00:28:56.005 [2024-07-25 01:29:18.340742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.005 [2024-07-25 01:29:18.340773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.005 qpair failed and we were unable to recover it.
00:28:56.005 [2024-07-25 01:29:18.341214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.005 [2024-07-25 01:29:18.341246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.005 qpair failed and we were unable to recover it.
00:28:56.005 [2024-07-25 01:29:18.341632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.005 [2024-07-25 01:29:18.341662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.005 qpair failed and we were unable to recover it.
00:28:56.005 [2024-07-25 01:29:18.342182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.005 [2024-07-25 01:29:18.342214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.005 qpair failed and we were unable to recover it.
00:28:56.005 [2024-07-25 01:29:18.342717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.005 [2024-07-25 01:29:18.342748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.005 qpair failed and we were unable to recover it.
00:28:56.005 [2024-07-25 01:29:18.342944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.005 [2024-07-25 01:29:18.342974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.005 qpair failed and we were unable to recover it.
00:28:56.005 [2024-07-25 01:29:18.343497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.005 [2024-07-25 01:29:18.343528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.005 qpair failed and we were unable to recover it.
00:28:56.005 [2024-07-25 01:29:18.344054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.005 [2024-07-25 01:29:18.344085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.005 qpair failed and we were unable to recover it.
00:28:56.005 [2024-07-25 01:29:18.344583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.005 [2024-07-25 01:29:18.344613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.005 qpair failed and we were unable to recover it.
00:28:56.005 [2024-07-25 01:29:18.345112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.005 [2024-07-25 01:29:18.345144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.005 qpair failed and we were unable to recover it.
00:28:56.005 [2024-07-25 01:29:18.345575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.005 [2024-07-25 01:29:18.345606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.005 qpair failed and we were unable to recover it.
00:28:56.005 [2024-07-25 01:29:18.346077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.005 [2024-07-25 01:29:18.346109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.005 qpair failed and we were unable to recover it.
00:28:56.005 [2024-07-25 01:29:18.346586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.005 [2024-07-25 01:29:18.346616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.005 qpair failed and we were unable to recover it.
00:28:56.005 [2024-07-25 01:29:18.347087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.005 [2024-07-25 01:29:18.347119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.005 qpair failed and we were unable to recover it.
00:28:56.005 [2024-07-25 01:29:18.347649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.005 [2024-07-25 01:29:18.347679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.005 qpair failed and we were unable to recover it.
00:28:56.005 [2024-07-25 01:29:18.348248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.005 [2024-07-25 01:29:18.348281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.005 qpair failed and we were unable to recover it.
00:28:56.005 [2024-07-25 01:29:18.348777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.005 [2024-07-25 01:29:18.348808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.005 qpair failed and we were unable to recover it.
00:28:56.005 [2024-07-25 01:29:18.349307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.005 [2024-07-25 01:29:18.349344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.005 qpair failed and we were unable to recover it.
00:28:56.005 [2024-07-25 01:29:18.349796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.005 [2024-07-25 01:29:18.349826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.005 qpair failed and we were unable to recover it.
00:28:56.005 [2024-07-25 01:29:18.350366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.005 [2024-07-25 01:29:18.350399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.005 qpair failed and we were unable to recover it.
00:28:56.005 [2024-07-25 01:29:18.350862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.005 [2024-07-25 01:29:18.350892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.005 qpair failed and we were unable to recover it.
00:28:56.005 [2024-07-25 01:29:18.351359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.005 [2024-07-25 01:29:18.351390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.005 qpair failed and we were unable to recover it.
00:28:56.005 [2024-07-25 01:29:18.351932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.005 [2024-07-25 01:29:18.351962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.005 qpair failed and we were unable to recover it.
00:28:56.005 [2024-07-25 01:29:18.352527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.005 [2024-07-25 01:29:18.352558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.005 qpair failed and we were unable to recover it.
00:28:56.005 [2024-07-25 01:29:18.353028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.005 [2024-07-25 01:29:18.353076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.005 qpair failed and we were unable to recover it.
00:28:56.005 [2024-07-25 01:29:18.353571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.005 [2024-07-25 01:29:18.353601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.005 qpair failed and we were unable to recover it.
00:28:56.005 [2024-07-25 01:29:18.354107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.005 [2024-07-25 01:29:18.354140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.005 qpair failed and we were unable to recover it.
00:28:56.005 [2024-07-25 01:29:18.354659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.005 [2024-07-25 01:29:18.354689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.005 qpair failed and we were unable to recover it.
00:28:56.005 [2024-07-25 01:29:18.355161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.005 [2024-07-25 01:29:18.355193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.005 qpair failed and we were unable to recover it.
00:28:56.005 [2024-07-25 01:29:18.355644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.005 [2024-07-25 01:29:18.355674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.005 qpair failed and we were unable to recover it.
00:28:56.005 [2024-07-25 01:29:18.356193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.005 [2024-07-25 01:29:18.356226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.005 qpair failed and we were unable to recover it.
00:28:56.005 [2024-07-25 01:29:18.356767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.005 [2024-07-25 01:29:18.356798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.005 qpair failed and we were unable to recover it.
00:28:56.005 [2024-07-25 01:29:18.357319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.005 [2024-07-25 01:29:18.357334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.005 qpair failed and we were unable to recover it.
00:28:56.005 [2024-07-25 01:29:18.357800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.005 [2024-07-25 01:29:18.357831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.005 qpair failed and we were unable to recover it.
00:28:56.005 [2024-07-25 01:29:18.358279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.005 [2024-07-25 01:29:18.358311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.005 qpair failed and we were unable to recover it.
00:28:56.005 [2024-07-25 01:29:18.358774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.005 [2024-07-25 01:29:18.358804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.005 qpair failed and we were unable to recover it.
00:28:56.005 [2024-07-25 01:29:18.359251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.006 [2024-07-25 01:29:18.359283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.006 qpair failed and we were unable to recover it.
00:28:56.006 [2024-07-25 01:29:18.359702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.006 [2024-07-25 01:29:18.359733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.006 qpair failed and we were unable to recover it.
00:28:56.006 [2024-07-25 01:29:18.360194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.006 [2024-07-25 01:29:18.360226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.006 qpair failed and we were unable to recover it.
00:28:56.006 [2024-07-25 01:29:18.360747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.006 [2024-07-25 01:29:18.360777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.006 qpair failed and we were unable to recover it.
00:28:56.006 [2024-07-25 01:29:18.361275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.006 [2024-07-25 01:29:18.361307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.006 qpair failed and we were unable to recover it.
00:28:56.006 [2024-07-25 01:29:18.361828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.006 [2024-07-25 01:29:18.361858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.006 qpair failed and we were unable to recover it.
00:28:56.006 [2024-07-25 01:29:18.362421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.006 [2024-07-25 01:29:18.362453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.006 qpair failed and we were unable to recover it.
00:28:56.006 [2024-07-25 01:29:18.363021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.006 [2024-07-25 01:29:18.363061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.006 qpair failed and we were unable to recover it.
00:28:56.006 [2024-07-25 01:29:18.363543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.006 [2024-07-25 01:29:18.363575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.006 qpair failed and we were unable to recover it. 00:28:56.006 [2024-07-25 01:29:18.364121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.006 [2024-07-25 01:29:18.364153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.006 qpair failed and we were unable to recover it. 00:28:56.006 [2024-07-25 01:29:18.364681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.006 [2024-07-25 01:29:18.364711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.006 qpair failed and we were unable to recover it. 00:28:56.006 [2024-07-25 01:29:18.365221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.006 [2024-07-25 01:29:18.365254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.006 qpair failed and we were unable to recover it. 00:28:56.006 [2024-07-25 01:29:18.365719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.006 [2024-07-25 01:29:18.365748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.006 qpair failed and we were unable to recover it. 
00:28:56.006 [2024-07-25 01:29:18.366189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.006 [2024-07-25 01:29:18.366220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.006 qpair failed and we were unable to recover it. 00:28:56.006 [2024-07-25 01:29:18.366741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.006 [2024-07-25 01:29:18.366771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.006 qpair failed and we were unable to recover it. 00:28:56.006 [2024-07-25 01:29:18.367295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.006 [2024-07-25 01:29:18.367327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.006 qpair failed and we were unable to recover it. 00:28:56.006 [2024-07-25 01:29:18.367763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.006 [2024-07-25 01:29:18.367793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.006 qpair failed and we were unable to recover it. 00:28:56.006 [2024-07-25 01:29:18.368311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.006 [2024-07-25 01:29:18.368343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.006 qpair failed and we were unable to recover it. 
00:28:56.006 [2024-07-25 01:29:18.368893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.006 [2024-07-25 01:29:18.368923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.006 qpair failed and we were unable to recover it. 00:28:56.006 [2024-07-25 01:29:18.369428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.006 [2024-07-25 01:29:18.369469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.006 qpair failed and we were unable to recover it. 00:28:56.006 [2024-07-25 01:29:18.369926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.006 [2024-07-25 01:29:18.369941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.006 qpair failed and we were unable to recover it. 00:28:56.006 [2024-07-25 01:29:18.370346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.006 [2024-07-25 01:29:18.370378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.006 qpair failed and we were unable to recover it. 00:28:56.006 [2024-07-25 01:29:18.370895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.006 [2024-07-25 01:29:18.370928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.006 qpair failed and we were unable to recover it. 
00:28:56.006 [2024-07-25 01:29:18.371485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.006 [2024-07-25 01:29:18.371517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.006 qpair failed and we were unable to recover it. 00:28:56.006 [2024-07-25 01:29:18.372011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.006 [2024-07-25 01:29:18.372050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.006 qpair failed and we were unable to recover it. 00:28:56.006 [2024-07-25 01:29:18.372541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.006 [2024-07-25 01:29:18.372572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.006 qpair failed and we were unable to recover it. 00:28:56.006 [2024-07-25 01:29:18.373086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.006 [2024-07-25 01:29:18.373102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.006 qpair failed and we were unable to recover it. 00:28:56.006 [2024-07-25 01:29:18.373518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.006 [2024-07-25 01:29:18.373532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.006 qpair failed and we were unable to recover it. 
00:28:56.006 [2024-07-25 01:29:18.373946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.006 [2024-07-25 01:29:18.373961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.006 qpair failed and we were unable to recover it. 00:28:56.006 [2024-07-25 01:29:18.374426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.006 [2024-07-25 01:29:18.374441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.006 qpair failed and we were unable to recover it. 00:28:56.006 [2024-07-25 01:29:18.374944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.006 [2024-07-25 01:29:18.374958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.006 qpair failed and we were unable to recover it. 00:28:56.006 [2024-07-25 01:29:18.375514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.006 [2024-07-25 01:29:18.375532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.006 qpair failed and we were unable to recover it. 00:28:56.006 [2024-07-25 01:29:18.375935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.007 [2024-07-25 01:29:18.375950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.007 qpair failed and we were unable to recover it. 
00:28:56.007 [2024-07-25 01:29:18.376312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.007 [2024-07-25 01:29:18.376335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.007 qpair failed and we were unable to recover it. 00:28:56.007 [2024-07-25 01:29:18.376820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.007 [2024-07-25 01:29:18.376835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.007 qpair failed and we were unable to recover it. 00:28:56.007 [2024-07-25 01:29:18.377316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.007 [2024-07-25 01:29:18.377332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.007 qpair failed and we were unable to recover it. 00:28:56.007 [2024-07-25 01:29:18.377767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.007 [2024-07-25 01:29:18.377798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.007 qpair failed and we were unable to recover it. 00:28:56.007 [2024-07-25 01:29:18.378249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.007 [2024-07-25 01:29:18.378265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.007 qpair failed and we were unable to recover it. 
00:28:56.007 [2024-07-25 01:29:18.378702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.007 [2024-07-25 01:29:18.378734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.007 qpair failed and we were unable to recover it. 00:28:56.007 [2024-07-25 01:29:18.379263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.007 [2024-07-25 01:29:18.379295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.007 qpair failed and we were unable to recover it. 00:28:56.007 [2024-07-25 01:29:18.379836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.007 [2024-07-25 01:29:18.379851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.007 qpair failed and we were unable to recover it. 00:28:56.007 [2024-07-25 01:29:18.380318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.007 [2024-07-25 01:29:18.380334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.007 qpair failed and we were unable to recover it. 00:28:56.007 [2024-07-25 01:29:18.380813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.007 [2024-07-25 01:29:18.380828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.007 qpair failed and we were unable to recover it. 
00:28:56.007 [2024-07-25 01:29:18.381187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.007 [2024-07-25 01:29:18.381203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.007 qpair failed and we were unable to recover it. 00:28:56.007 [2024-07-25 01:29:18.381701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.007 [2024-07-25 01:29:18.381732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.007 qpair failed and we were unable to recover it. 00:28:56.007 [2024-07-25 01:29:18.382286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.007 [2024-07-25 01:29:18.382302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.007 qpair failed and we were unable to recover it. 00:28:56.007 [2024-07-25 01:29:18.382816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.007 [2024-07-25 01:29:18.382846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.007 qpair failed and we were unable to recover it. 00:28:56.007 [2024-07-25 01:29:18.383415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.007 [2024-07-25 01:29:18.383431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.007 qpair failed and we were unable to recover it. 
00:28:56.007 [2024-07-25 01:29:18.383945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.007 [2024-07-25 01:29:18.383981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.007 qpair failed and we were unable to recover it. 00:28:56.007 [2024-07-25 01:29:18.384514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.007 [2024-07-25 01:29:18.384546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.007 qpair failed and we were unable to recover it. 00:28:56.007 [2024-07-25 01:29:18.384937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.007 [2024-07-25 01:29:18.384952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.007 qpair failed and we were unable to recover it. 00:28:56.007 [2024-07-25 01:29:18.385362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.007 [2024-07-25 01:29:18.385394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.007 qpair failed and we were unable to recover it. 00:28:56.007 [2024-07-25 01:29:18.385860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.007 [2024-07-25 01:29:18.385890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.007 qpair failed and we were unable to recover it. 
00:28:56.007 [2024-07-25 01:29:18.386328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.007 [2024-07-25 01:29:18.386343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.007 qpair failed and we were unable to recover it. 00:28:56.007 [2024-07-25 01:29:18.386761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.007 [2024-07-25 01:29:18.386776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.007 qpair failed and we were unable to recover it. 00:28:56.007 [2024-07-25 01:29:18.387243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.007 [2024-07-25 01:29:18.387258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.007 qpair failed and we were unable to recover it. 00:28:56.007 [2024-07-25 01:29:18.387691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.007 [2024-07-25 01:29:18.387707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.007 qpair failed and we were unable to recover it. 00:28:56.007 [2024-07-25 01:29:18.388140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.007 [2024-07-25 01:29:18.388171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.007 qpair failed and we were unable to recover it. 
00:28:56.007 [2024-07-25 01:29:18.388637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.007 [2024-07-25 01:29:18.388652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.007 qpair failed and we were unable to recover it. 00:28:56.007 [2024-07-25 01:29:18.389079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.007 [2024-07-25 01:29:18.389111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.007 qpair failed and we were unable to recover it. 00:28:56.007 [2024-07-25 01:29:18.389567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.007 [2024-07-25 01:29:18.389598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.007 qpair failed and we were unable to recover it. 00:28:56.007 [2024-07-25 01:29:18.390104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.007 [2024-07-25 01:29:18.390136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.007 qpair failed and we were unable to recover it. 00:28:56.007 [2024-07-25 01:29:18.390709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.007 [2024-07-25 01:29:18.390724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.007 qpair failed and we were unable to recover it. 
00:28:56.007 [2024-07-25 01:29:18.391255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.007 [2024-07-25 01:29:18.391271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.007 qpair failed and we were unable to recover it. 00:28:56.007 [2024-07-25 01:29:18.391758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.007 [2024-07-25 01:29:18.391773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.007 qpair failed and we were unable to recover it. 00:28:56.007 [2024-07-25 01:29:18.392133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.007 [2024-07-25 01:29:18.392148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.007 qpair failed and we were unable to recover it. 00:28:56.007 [2024-07-25 01:29:18.392562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.007 [2024-07-25 01:29:18.392577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.007 qpair failed and we were unable to recover it. 00:28:56.007 [2024-07-25 01:29:18.393078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.007 [2024-07-25 01:29:18.393110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.007 qpair failed and we were unable to recover it. 
00:28:56.007 [2024-07-25 01:29:18.393700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.007 [2024-07-25 01:29:18.393732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.007 qpair failed and we were unable to recover it. 00:28:56.007 [2024-07-25 01:29:18.394295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.007 [2024-07-25 01:29:18.394327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.007 qpair failed and we were unable to recover it. 00:28:56.008 [2024-07-25 01:29:18.394724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.008 [2024-07-25 01:29:18.394740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.008 qpair failed and we were unable to recover it. 00:28:56.008 [2024-07-25 01:29:18.395206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.008 [2024-07-25 01:29:18.395251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.008 qpair failed and we were unable to recover it. 00:28:56.008 [2024-07-25 01:29:18.395743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.008 [2024-07-25 01:29:18.395758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.008 qpair failed and we were unable to recover it. 
00:28:56.008 [2024-07-25 01:29:18.396181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.008 [2024-07-25 01:29:18.396212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.008 qpair failed and we were unable to recover it. 00:28:56.008 [2024-07-25 01:29:18.396723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.008 [2024-07-25 01:29:18.396755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.008 qpair failed and we were unable to recover it. 00:28:56.008 [2024-07-25 01:29:18.397165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.008 [2024-07-25 01:29:18.397197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.008 qpair failed and we were unable to recover it. 00:28:56.008 [2024-07-25 01:29:18.397699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.008 [2024-07-25 01:29:18.397730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.008 qpair failed and we were unable to recover it. 00:28:56.008 [2024-07-25 01:29:18.398187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.008 [2024-07-25 01:29:18.398219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.008 qpair failed and we were unable to recover it. 
00:28:56.008 [2024-07-25 01:29:18.398777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.008 [2024-07-25 01:29:18.398792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.008 qpair failed and we were unable to recover it. 00:28:56.008 [2024-07-25 01:29:18.399312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.008 [2024-07-25 01:29:18.399344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.008 qpair failed and we were unable to recover it. 00:28:56.008 [2024-07-25 01:29:18.399803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.008 [2024-07-25 01:29:18.399833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.008 qpair failed and we were unable to recover it. 00:28:56.008 [2024-07-25 01:29:18.400344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.008 [2024-07-25 01:29:18.400377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.008 qpair failed and we were unable to recover it. 00:28:56.008 [2024-07-25 01:29:18.400917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.008 [2024-07-25 01:29:18.400932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.008 qpair failed and we were unable to recover it. 
00:28:56.008 [2024-07-25 01:29:18.401454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.008 [2024-07-25 01:29:18.401486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.008 qpair failed and we were unable to recover it. 00:28:56.008 [2024-07-25 01:29:18.402025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.008 [2024-07-25 01:29:18.402066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.008 qpair failed and we were unable to recover it. 00:28:56.008 [2024-07-25 01:29:18.402537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.008 [2024-07-25 01:29:18.402567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.008 qpair failed and we were unable to recover it. 00:28:56.008 [2024-07-25 01:29:18.403055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.008 [2024-07-25 01:29:18.403087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.008 qpair failed and we were unable to recover it. 00:28:56.008 [2024-07-25 01:29:18.403638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.008 [2024-07-25 01:29:18.403653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.008 qpair failed and we were unable to recover it. 
00:28:56.011 [2024-07-25 01:29:18.462700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.011 [2024-07-25 01:29:18.462732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.011 qpair failed and we were unable to recover it. 00:28:56.011 [2024-07-25 01:29:18.463179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.011 [2024-07-25 01:29:18.463211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.011 qpair failed and we were unable to recover it. 00:28:56.011 [2024-07-25 01:29:18.463670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.011 [2024-07-25 01:29:18.463702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.011 qpair failed and we were unable to recover it. 00:28:56.011 [2024-07-25 01:29:18.464248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.011 [2024-07-25 01:29:18.464281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.011 qpair failed and we were unable to recover it. 00:28:56.011 [2024-07-25 01:29:18.464789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.011 [2024-07-25 01:29:18.464820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.011 qpair failed and we were unable to recover it. 
00:28:56.011 [2024-07-25 01:29:18.465372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.011 [2024-07-25 01:29:18.465404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.011 qpair failed and we were unable to recover it. 00:28:56.011 [2024-07-25 01:29:18.465926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.011 [2024-07-25 01:29:18.465958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.011 qpair failed and we were unable to recover it. 00:28:56.011 [2024-07-25 01:29:18.466516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.011 [2024-07-25 01:29:18.466549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.011 qpair failed and we were unable to recover it. 00:28:56.011 [2024-07-25 01:29:18.467112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.011 [2024-07-25 01:29:18.467145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.011 qpair failed and we were unable to recover it. 00:28:56.011 [2024-07-25 01:29:18.467689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.011 [2024-07-25 01:29:18.467720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.011 qpair failed and we were unable to recover it. 
00:28:56.011 [2024-07-25 01:29:18.468278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.011 [2024-07-25 01:29:18.468316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.011 qpair failed and we were unable to recover it. 00:28:56.011 [2024-07-25 01:29:18.468871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.011 [2024-07-25 01:29:18.468902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.011 qpair failed and we were unable to recover it. 00:28:56.011 [2024-07-25 01:29:18.469449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.011 [2024-07-25 01:29:18.469482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.011 qpair failed and we were unable to recover it. 00:28:56.011 [2024-07-25 01:29:18.469876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.011 [2024-07-25 01:29:18.469906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.011 qpair failed and we were unable to recover it. 00:28:56.011 [2024-07-25 01:29:18.470455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.011 [2024-07-25 01:29:18.470487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.011 qpair failed and we were unable to recover it. 
00:28:56.011 [2024-07-25 01:29:18.471068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.011 [2024-07-25 01:29:18.471101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.011 qpair failed and we were unable to recover it. 00:28:56.011 [2024-07-25 01:29:18.471680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.011 [2024-07-25 01:29:18.471711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.011 qpair failed and we were unable to recover it. 00:28:56.011 [2024-07-25 01:29:18.472283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.011 [2024-07-25 01:29:18.472315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.011 qpair failed and we were unable to recover it. 00:28:56.012 [2024-07-25 01:29:18.472850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.012 [2024-07-25 01:29:18.472882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.012 qpair failed and we were unable to recover it. 00:28:56.012 [2024-07-25 01:29:18.473334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.012 [2024-07-25 01:29:18.473366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.012 qpair failed and we were unable to recover it. 
00:28:56.012 [2024-07-25 01:29:18.473870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.012 [2024-07-25 01:29:18.473887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.012 qpair failed and we were unable to recover it. 00:28:56.012 [2024-07-25 01:29:18.474323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.012 [2024-07-25 01:29:18.474339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.012 qpair failed and we were unable to recover it. 00:28:56.012 [2024-07-25 01:29:18.474823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.012 [2024-07-25 01:29:18.474855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.012 qpair failed and we were unable to recover it. 00:28:56.012 [2024-07-25 01:29:18.475373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.012 [2024-07-25 01:29:18.475406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.012 qpair failed and we were unable to recover it. 00:28:56.012 [2024-07-25 01:29:18.475935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.012 [2024-07-25 01:29:18.475967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.012 qpair failed and we were unable to recover it. 
00:28:56.012 [2024-07-25 01:29:18.476462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.012 [2024-07-25 01:29:18.476494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.012 qpair failed and we were unable to recover it. 00:28:56.012 [2024-07-25 01:29:18.476956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.012 [2024-07-25 01:29:18.476987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.012 qpair failed and we were unable to recover it. 00:28:56.012 [2024-07-25 01:29:18.477467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.012 [2024-07-25 01:29:18.477500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.012 qpair failed and we were unable to recover it. 00:28:56.012 [2024-07-25 01:29:18.478038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.012 [2024-07-25 01:29:18.478083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.012 qpair failed and we were unable to recover it. 00:28:56.012 [2024-07-25 01:29:18.478638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.012 [2024-07-25 01:29:18.478655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.012 qpair failed and we were unable to recover it. 
00:28:56.012 [2024-07-25 01:29:18.479069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.012 [2024-07-25 01:29:18.479086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.012 qpair failed and we were unable to recover it. 00:28:56.280 [2024-07-25 01:29:18.479510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.280 [2024-07-25 01:29:18.479543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.280 qpair failed and we were unable to recover it. 00:28:56.280 [2024-07-25 01:29:18.479990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.280 [2024-07-25 01:29:18.480023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.280 qpair failed and we were unable to recover it. 00:28:56.280 [2024-07-25 01:29:18.480527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.280 [2024-07-25 01:29:18.480561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.280 qpair failed and we were unable to recover it. 00:28:56.280 [2024-07-25 01:29:18.481090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.280 [2024-07-25 01:29:18.481123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.280 qpair failed and we were unable to recover it. 
00:28:56.280 [2024-07-25 01:29:18.481671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.280 [2024-07-25 01:29:18.481703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.280 qpair failed and we were unable to recover it. 00:28:56.280 [2024-07-25 01:29:18.482286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.280 [2024-07-25 01:29:18.482319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.280 qpair failed and we were unable to recover it. 00:28:56.280 [2024-07-25 01:29:18.482907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.280 [2024-07-25 01:29:18.482940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.280 qpair failed and we were unable to recover it. 00:28:56.280 [2024-07-25 01:29:18.483522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.280 [2024-07-25 01:29:18.483555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.280 qpair failed and we were unable to recover it. 00:28:56.280 [2024-07-25 01:29:18.484101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.280 [2024-07-25 01:29:18.484134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.280 qpair failed and we were unable to recover it. 
00:28:56.280 [2024-07-25 01:29:18.484725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.280 [2024-07-25 01:29:18.484757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.280 qpair failed and we were unable to recover it. 00:28:56.280 [2024-07-25 01:29:18.485331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.280 [2024-07-25 01:29:18.485365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.280 qpair failed and we were unable to recover it. 00:28:56.280 [2024-07-25 01:29:18.485879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.281 [2024-07-25 01:29:18.485911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.281 qpair failed and we were unable to recover it. 00:28:56.281 [2024-07-25 01:29:18.486363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.281 [2024-07-25 01:29:18.486396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.281 qpair failed and we were unable to recover it. 00:28:56.281 [2024-07-25 01:29:18.486932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.281 [2024-07-25 01:29:18.486964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.281 qpair failed and we were unable to recover it. 
00:28:56.281 [2024-07-25 01:29:18.487546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.281 [2024-07-25 01:29:18.487579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.281 qpair failed and we were unable to recover it. 00:28:56.281 [2024-07-25 01:29:18.488144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.281 [2024-07-25 01:29:18.488176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.281 qpair failed and we were unable to recover it. 00:28:56.281 [2024-07-25 01:29:18.488722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.281 [2024-07-25 01:29:18.488754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.281 qpair failed and we were unable to recover it. 00:28:56.281 [2024-07-25 01:29:18.489334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.281 [2024-07-25 01:29:18.489367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.281 qpair failed and we were unable to recover it. 00:28:56.281 [2024-07-25 01:29:18.489952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.281 [2024-07-25 01:29:18.489984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.281 qpair failed and we were unable to recover it. 
00:28:56.281 [2024-07-25 01:29:18.490577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.281 [2024-07-25 01:29:18.490616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.281 qpair failed and we were unable to recover it. 00:28:56.281 [2024-07-25 01:29:18.491126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.281 [2024-07-25 01:29:18.491159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.281 qpair failed and we were unable to recover it. 00:28:56.281 [2024-07-25 01:29:18.491676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.281 [2024-07-25 01:29:18.491708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.281 qpair failed and we were unable to recover it. 00:28:56.281 [2024-07-25 01:29:18.492261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.281 [2024-07-25 01:29:18.492293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.281 qpair failed and we were unable to recover it. 00:28:56.281 [2024-07-25 01:29:18.492804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.281 [2024-07-25 01:29:18.492836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.281 qpair failed and we were unable to recover it. 
00:28:56.281 [2024-07-25 01:29:18.493356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.281 [2024-07-25 01:29:18.493390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.281 qpair failed and we were unable to recover it. 00:28:56.281 [2024-07-25 01:29:18.493950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.281 [2024-07-25 01:29:18.493982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.281 qpair failed and we were unable to recover it. 00:28:56.281 [2024-07-25 01:29:18.494523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.281 [2024-07-25 01:29:18.494557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.281 qpair failed and we were unable to recover it. 00:28:56.281 [2024-07-25 01:29:18.495152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.281 [2024-07-25 01:29:18.495186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.281 qpair failed and we were unable to recover it. 00:28:56.281 [2024-07-25 01:29:18.495763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.281 [2024-07-25 01:29:18.495794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.281 qpair failed and we were unable to recover it. 
00:28:56.281 [2024-07-25 01:29:18.496335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.281 [2024-07-25 01:29:18.496369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.281 qpair failed and we were unable to recover it. 00:28:56.281 [2024-07-25 01:29:18.496923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.281 [2024-07-25 01:29:18.496955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.281 qpair failed and we were unable to recover it. 00:28:56.281 [2024-07-25 01:29:18.497345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.281 [2024-07-25 01:29:18.497379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.281 qpair failed and we were unable to recover it. 00:28:56.281 [2024-07-25 01:29:18.497924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.281 [2024-07-25 01:29:18.497940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.281 qpair failed and we were unable to recover it. 00:28:56.281 [2024-07-25 01:29:18.498329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.281 [2024-07-25 01:29:18.498362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.281 qpair failed and we were unable to recover it. 
00:28:56.281 [2024-07-25 01:29:18.498897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.281 [2024-07-25 01:29:18.498928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.281 qpair failed and we were unable to recover it. 00:28:56.281 [2024-07-25 01:29:18.499426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.281 [2024-07-25 01:29:18.499459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.281 qpair failed and we were unable to recover it. 00:28:56.281 [2024-07-25 01:29:18.500005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.281 [2024-07-25 01:29:18.500037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.281 qpair failed and we were unable to recover it. 00:28:56.281 [2024-07-25 01:29:18.500608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.281 [2024-07-25 01:29:18.500641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.281 qpair failed and we were unable to recover it. 00:28:56.281 [2024-07-25 01:29:18.501078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.281 [2024-07-25 01:29:18.501112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.281 qpair failed and we were unable to recover it. 
00:28:56.281 [2024-07-25 01:29:18.501573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.281 [2024-07-25 01:29:18.501604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.281 qpair failed and we were unable to recover it. 00:28:56.281 [2024-07-25 01:29:18.502160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.281 [2024-07-25 01:29:18.502177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.281 qpair failed and we were unable to recover it. 00:28:56.281 [2024-07-25 01:29:18.502712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.281 [2024-07-25 01:29:18.502744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.281 qpair failed and we were unable to recover it. 00:28:56.281 [2024-07-25 01:29:18.503280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.281 [2024-07-25 01:29:18.503314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.281 qpair failed and we were unable to recover it. 00:28:56.281 [2024-07-25 01:29:18.503897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.281 [2024-07-25 01:29:18.503929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.281 qpair failed and we were unable to recover it. 
00:28:56.281 [2024-07-25 01:29:18.504509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.281 [2024-07-25 01:29:18.504541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.281 qpair failed and we were unable to recover it.
00:28:56.285 [2024-07-25 01:29:18.570180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.285 [2024-07-25 01:29:18.570213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.285 qpair failed and we were unable to recover it. 00:28:56.285 [2024-07-25 01:29:18.570794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.285 [2024-07-25 01:29:18.570825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.285 qpair failed and we were unable to recover it. 00:28:56.285 [2024-07-25 01:29:18.571386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.285 [2024-07-25 01:29:18.571420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.285 qpair failed and we were unable to recover it. 00:28:56.285 [2024-07-25 01:29:18.571983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.285 [2024-07-25 01:29:18.572014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.285 qpair failed and we were unable to recover it. 00:28:56.285 [2024-07-25 01:29:18.572551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.285 [2024-07-25 01:29:18.572585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.285 qpair failed and we were unable to recover it. 
00:28:56.285 [2024-07-25 01:29:18.573146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.285 [2024-07-25 01:29:18.573178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.285 qpair failed and we were unable to recover it. 00:28:56.285 [2024-07-25 01:29:18.573701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.285 [2024-07-25 01:29:18.573733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.285 qpair failed and we were unable to recover it. 00:28:56.285 [2024-07-25 01:29:18.574202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.285 [2024-07-25 01:29:18.574236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.285 qpair failed and we were unable to recover it. 00:28:56.285 [2024-07-25 01:29:18.574740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.285 [2024-07-25 01:29:18.574772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.285 qpair failed and we were unable to recover it. 00:28:56.285 [2024-07-25 01:29:18.575318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.285 [2024-07-25 01:29:18.575352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.285 qpair failed and we were unable to recover it. 
00:28:56.285 [2024-07-25 01:29:18.575818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.285 [2024-07-25 01:29:18.575856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.285 qpair failed and we were unable to recover it. 00:28:56.285 [2024-07-25 01:29:18.576385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.285 [2024-07-25 01:29:18.576418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.285 qpair failed and we were unable to recover it. 00:28:56.285 [2024-07-25 01:29:18.576956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.285 [2024-07-25 01:29:18.576988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.285 qpair failed and we were unable to recover it. 00:28:56.285 [2024-07-25 01:29:18.577490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.285 [2024-07-25 01:29:18.577524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.285 qpair failed and we were unable to recover it. 00:28:56.285 [2024-07-25 01:29:18.578068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.285 [2024-07-25 01:29:18.578101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.285 qpair failed and we were unable to recover it. 
00:28:56.285 [2024-07-25 01:29:18.578693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.285 [2024-07-25 01:29:18.578725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.285 qpair failed and we were unable to recover it. 00:28:56.285 [2024-07-25 01:29:18.579266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.285 [2024-07-25 01:29:18.579304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.285 qpair failed and we were unable to recover it. 00:28:56.285 [2024-07-25 01:29:18.579863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.285 [2024-07-25 01:29:18.579894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.285 qpair failed and we were unable to recover it. 00:28:56.285 [2024-07-25 01:29:18.580460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.285 [2024-07-25 01:29:18.580493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.285 qpair failed and we were unable to recover it. 00:28:56.285 [2024-07-25 01:29:18.580984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.285 [2024-07-25 01:29:18.581017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.285 qpair failed and we were unable to recover it. 
00:28:56.285 [2024-07-25 01:29:18.581493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.285 [2024-07-25 01:29:18.581532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.285 qpair failed and we were unable to recover it. 00:28:56.285 [2024-07-25 01:29:18.581992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.285 [2024-07-25 01:29:18.582025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.285 qpair failed and we were unable to recover it. 00:28:56.285 [2024-07-25 01:29:18.582571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.285 [2024-07-25 01:29:18.582604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.285 qpair failed and we were unable to recover it. 00:28:56.285 [2024-07-25 01:29:18.583185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.285 [2024-07-25 01:29:18.583219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.285 qpair failed and we were unable to recover it. 00:28:56.285 [2024-07-25 01:29:18.583813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.285 [2024-07-25 01:29:18.583845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.285 qpair failed and we were unable to recover it. 
00:28:56.285 [2024-07-25 01:29:18.584391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.285 [2024-07-25 01:29:18.584428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.285 qpair failed and we were unable to recover it. 00:28:56.285 [2024-07-25 01:29:18.584947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.285 [2024-07-25 01:29:18.584987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.285 qpair failed and we were unable to recover it. 00:28:56.285 [2024-07-25 01:29:18.585493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.285 [2024-07-25 01:29:18.585526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.285 qpair failed and we were unable to recover it. 00:28:56.285 [2024-07-25 01:29:18.586097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.286 [2024-07-25 01:29:18.586131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.286 qpair failed and we were unable to recover it. 00:28:56.286 [2024-07-25 01:29:18.586710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.286 [2024-07-25 01:29:18.586741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.286 qpair failed and we were unable to recover it. 
00:28:56.286 [2024-07-25 01:29:18.587232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.286 [2024-07-25 01:29:18.587266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.286 qpair failed and we were unable to recover it. 00:28:56.286 [2024-07-25 01:29:18.587804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.286 [2024-07-25 01:29:18.587837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.286 qpair failed and we were unable to recover it. 00:28:56.286 [2024-07-25 01:29:18.588341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.286 [2024-07-25 01:29:18.588358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.286 qpair failed and we were unable to recover it. 00:28:56.286 [2024-07-25 01:29:18.588863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.286 [2024-07-25 01:29:18.588896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.286 qpair failed and we were unable to recover it. 00:28:56.286 [2024-07-25 01:29:18.589492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.286 [2024-07-25 01:29:18.589525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.286 qpair failed and we were unable to recover it. 
00:28:56.286 [2024-07-25 01:29:18.589977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.286 [2024-07-25 01:29:18.590008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.286 qpair failed and we were unable to recover it. 00:28:56.286 [2024-07-25 01:29:18.590574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.286 [2024-07-25 01:29:18.590606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.286 qpair failed and we were unable to recover it. 00:28:56.286 [2024-07-25 01:29:18.591146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.286 [2024-07-25 01:29:18.591179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.286 qpair failed and we were unable to recover it. 00:28:56.286 [2024-07-25 01:29:18.591687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.286 [2024-07-25 01:29:18.591720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.286 qpair failed and we were unable to recover it. 00:28:56.286 [2024-07-25 01:29:18.592234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.286 [2024-07-25 01:29:18.592268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.286 qpair failed and we were unable to recover it. 
00:28:56.286 [2024-07-25 01:29:18.592800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.286 [2024-07-25 01:29:18.592832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.286 qpair failed and we were unable to recover it. 00:28:56.286 [2024-07-25 01:29:18.593372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.286 [2024-07-25 01:29:18.593406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.286 qpair failed and we were unable to recover it. 00:28:56.286 [2024-07-25 01:29:18.593982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.286 [2024-07-25 01:29:18.594016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.286 qpair failed and we were unable to recover it. 00:28:56.286 [2024-07-25 01:29:18.594584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.286 [2024-07-25 01:29:18.594617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.286 qpair failed and we were unable to recover it. 00:28:56.286 [2024-07-25 01:29:18.595166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.286 [2024-07-25 01:29:18.595200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.286 qpair failed and we were unable to recover it. 
00:28:56.286 [2024-07-25 01:29:18.595741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.286 [2024-07-25 01:29:18.595774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.286 qpair failed and we were unable to recover it. 00:28:56.286 [2024-07-25 01:29:18.596267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.286 [2024-07-25 01:29:18.596301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.286 qpair failed and we were unable to recover it. 00:28:56.286 [2024-07-25 01:29:18.596852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.286 [2024-07-25 01:29:18.596885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.286 qpair failed and we were unable to recover it. 00:28:56.286 [2024-07-25 01:29:18.597446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.286 [2024-07-25 01:29:18.597479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.286 qpair failed and we were unable to recover it. 00:28:56.286 [2024-07-25 01:29:18.598037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.286 [2024-07-25 01:29:18.598078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.286 qpair failed and we were unable to recover it. 
00:28:56.286 [2024-07-25 01:29:18.598630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.286 [2024-07-25 01:29:18.598662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.286 qpair failed and we were unable to recover it. 00:28:56.286 [2024-07-25 01:29:18.599128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.286 [2024-07-25 01:29:18.599162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.286 qpair failed and we were unable to recover it. 00:28:56.286 [2024-07-25 01:29:18.599624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.286 [2024-07-25 01:29:18.599656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.286 qpair failed and we were unable to recover it. 00:28:56.286 [2024-07-25 01:29:18.600108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.286 [2024-07-25 01:29:18.600142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.286 qpair failed and we were unable to recover it. 00:28:56.286 [2024-07-25 01:29:18.600704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.286 [2024-07-25 01:29:18.600736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.286 qpair failed and we were unable to recover it. 
00:28:56.286 [2024-07-25 01:29:18.601211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.286 [2024-07-25 01:29:18.601245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.286 qpair failed and we were unable to recover it. 00:28:56.286 [2024-07-25 01:29:18.601767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.286 [2024-07-25 01:29:18.601798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.286 qpair failed and we were unable to recover it. 00:28:56.286 [2024-07-25 01:29:18.602362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.286 [2024-07-25 01:29:18.602396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.286 qpair failed and we were unable to recover it. 00:28:56.286 [2024-07-25 01:29:18.602931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.286 [2024-07-25 01:29:18.602963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.286 qpair failed and we were unable to recover it. 00:28:56.286 [2024-07-25 01:29:18.603474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.286 [2024-07-25 01:29:18.603508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.286 qpair failed and we were unable to recover it. 
00:28:56.286 [2024-07-25 01:29:18.603996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.286 [2024-07-25 01:29:18.604035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.286 qpair failed and we were unable to recover it. 00:28:56.286 [2024-07-25 01:29:18.604505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.286 [2024-07-25 01:29:18.604537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.286 qpair failed and we were unable to recover it. 00:28:56.286 [2024-07-25 01:29:18.605070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.286 [2024-07-25 01:29:18.605104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.286 qpair failed and we were unable to recover it. 00:28:56.286 [2024-07-25 01:29:18.605641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.286 [2024-07-25 01:29:18.605674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.286 qpair failed and we were unable to recover it. 00:28:56.286 [2024-07-25 01:29:18.606156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.286 [2024-07-25 01:29:18.606190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.287 qpair failed and we were unable to recover it. 
00:28:56.287 [2024-07-25 01:29:18.606727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.287 [2024-07-25 01:29:18.606759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.287 qpair failed and we were unable to recover it. 00:28:56.287 [2024-07-25 01:29:18.607346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.287 [2024-07-25 01:29:18.607379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.287 qpair failed and we were unable to recover it. 00:28:56.287 [2024-07-25 01:29:18.607786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.287 [2024-07-25 01:29:18.607818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.287 qpair failed and we were unable to recover it. 00:28:56.287 [2024-07-25 01:29:18.608363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.287 [2024-07-25 01:29:18.608396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.287 qpair failed and we were unable to recover it. 00:28:56.287 [2024-07-25 01:29:18.608886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.287 [2024-07-25 01:29:18.608918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.287 qpair failed and we were unable to recover it. 
00:28:56.287 [2024-07-25 01:29:18.609396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.287 [2024-07-25 01:29:18.609429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.287 qpair failed and we were unable to recover it. 00:28:56.287 [2024-07-25 01:29:18.609892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.287 [2024-07-25 01:29:18.609924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.287 qpair failed and we were unable to recover it. 00:28:56.287 [2024-07-25 01:29:18.610444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.287 [2024-07-25 01:29:18.610477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.287 qpair failed and we were unable to recover it. 00:28:56.287 [2024-07-25 01:29:18.610950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.287 [2024-07-25 01:29:18.610982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.287 qpair failed and we were unable to recover it. 00:28:56.287 [2024-07-25 01:29:18.611564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.287 [2024-07-25 01:29:18.611599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.287 qpair failed and we were unable to recover it. 
00:28:56.287 [2024-07-25 01:29:18.612113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.287 [2024-07-25 01:29:18.612147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.287 qpair failed and we were unable to recover it. 
[... the same connect() failure (errno = 111, ECONNREFUSED) and qpair recovery error repeat for tqpair=0x7f83dc000b90 (addr=10.0.0.2, port=4420) through timestamp 2024-07-25 01:29:18.674935; identical repeated log lines elided ...]
00:28:56.290 [2024-07-25 01:29:18.675507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.290 [2024-07-25 01:29:18.675540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.290 qpair failed and we were unable to recover it. 00:28:56.290 [2024-07-25 01:29:18.676125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.290 [2024-07-25 01:29:18.676158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.290 qpair failed and we were unable to recover it. 00:28:56.290 [2024-07-25 01:29:18.676733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.290 [2024-07-25 01:29:18.676766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.290 qpair failed and we were unable to recover it. 00:28:56.290 [2024-07-25 01:29:18.677187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.290 [2024-07-25 01:29:18.677221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.290 qpair failed and we were unable to recover it. 00:28:56.290 [2024-07-25 01:29:18.677765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.290 [2024-07-25 01:29:18.677797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.290 qpair failed and we were unable to recover it. 
00:28:56.290 [2024-07-25 01:29:18.678362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.290 [2024-07-25 01:29:18.678395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.290 qpair failed and we were unable to recover it. 00:28:56.290 [2024-07-25 01:29:18.678959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.290 [2024-07-25 01:29:18.678990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.290 qpair failed and we were unable to recover it. 00:28:56.290 [2024-07-25 01:29:18.679575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.290 [2024-07-25 01:29:18.679609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.290 qpair failed and we were unable to recover it. 00:28:56.290 [2024-07-25 01:29:18.680142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.290 [2024-07-25 01:29:18.680159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.290 qpair failed and we were unable to recover it. 00:28:56.290 [2024-07-25 01:29:18.680670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.290 [2024-07-25 01:29:18.680702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.290 qpair failed and we were unable to recover it. 
00:28:56.290 [2024-07-25 01:29:18.681264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.290 [2024-07-25 01:29:18.681296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.290 qpair failed and we were unable to recover it. 00:28:56.291 [2024-07-25 01:29:18.681815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.291 [2024-07-25 01:29:18.681848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.291 qpair failed and we were unable to recover it. 00:28:56.291 [2024-07-25 01:29:18.682364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.291 [2024-07-25 01:29:18.682397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.291 qpair failed and we were unable to recover it. 00:28:56.291 [2024-07-25 01:29:18.682851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.291 [2024-07-25 01:29:18.682882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.291 qpair failed and we were unable to recover it. 00:28:56.291 [2024-07-25 01:29:18.683445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.291 [2024-07-25 01:29:18.683478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.291 qpair failed and we were unable to recover it. 
00:28:56.291 [2024-07-25 01:29:18.684053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.291 [2024-07-25 01:29:18.684086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.291 qpair failed and we were unable to recover it. 00:28:56.291 [2024-07-25 01:29:18.684656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.291 [2024-07-25 01:29:18.684688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.291 qpair failed and we were unable to recover it. 00:28:56.291 [2024-07-25 01:29:18.685250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.291 [2024-07-25 01:29:18.685289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.291 qpair failed and we were unable to recover it. 00:28:56.291 [2024-07-25 01:29:18.685840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.291 [2024-07-25 01:29:18.685872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.291 qpair failed and we were unable to recover it. 00:28:56.291 [2024-07-25 01:29:18.686441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.291 [2024-07-25 01:29:18.686474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.291 qpair failed and we were unable to recover it. 
00:28:56.291 [2024-07-25 01:29:18.687013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.291 [2024-07-25 01:29:18.687054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.291 qpair failed and we were unable to recover it. 00:28:56.291 [2024-07-25 01:29:18.687617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.291 [2024-07-25 01:29:18.687649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.291 qpair failed and we were unable to recover it. 00:28:56.291 [2024-07-25 01:29:18.688100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.291 [2024-07-25 01:29:18.688134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.291 qpair failed and we were unable to recover it. 00:28:56.291 [2024-07-25 01:29:18.688604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.291 [2024-07-25 01:29:18.688638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.291 qpair failed and we were unable to recover it. 00:28:56.291 [2024-07-25 01:29:18.689158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.291 [2024-07-25 01:29:18.689191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.291 qpair failed and we were unable to recover it. 
00:28:56.291 [2024-07-25 01:29:18.689732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.291 [2024-07-25 01:29:18.689764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.291 qpair failed and we were unable to recover it. 00:28:56.291 [2024-07-25 01:29:18.690302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.291 [2024-07-25 01:29:18.690335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.291 qpair failed and we were unable to recover it. 00:28:56.291 [2024-07-25 01:29:18.690896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.291 [2024-07-25 01:29:18.690929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.291 qpair failed and we were unable to recover it. 00:28:56.291 [2024-07-25 01:29:18.691496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.291 [2024-07-25 01:29:18.691528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.291 qpair failed and we were unable to recover it. 00:28:56.291 [2024-07-25 01:29:18.692070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.291 [2024-07-25 01:29:18.692103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.291 qpair failed and we were unable to recover it. 
00:28:56.291 [2024-07-25 01:29:18.692672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.291 [2024-07-25 01:29:18.692704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.291 qpair failed and we were unable to recover it. 00:28:56.291 [2024-07-25 01:29:18.693272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.291 [2024-07-25 01:29:18.693305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.291 qpair failed and we were unable to recover it. 00:28:56.291 [2024-07-25 01:29:18.693758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.291 [2024-07-25 01:29:18.693789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.291 qpair failed and we were unable to recover it. 00:28:56.291 [2024-07-25 01:29:18.694309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.291 [2024-07-25 01:29:18.694343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.291 qpair failed and we were unable to recover it. 00:28:56.291 [2024-07-25 01:29:18.694817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.291 [2024-07-25 01:29:18.694849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.291 qpair failed and we were unable to recover it. 
00:28:56.291 [2024-07-25 01:29:18.695320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.291 [2024-07-25 01:29:18.695353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.291 qpair failed and we were unable to recover it. 00:28:56.291 [2024-07-25 01:29:18.695891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.291 [2024-07-25 01:29:18.695923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.291 qpair failed and we were unable to recover it. 00:28:56.291 [2024-07-25 01:29:18.696425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.291 [2024-07-25 01:29:18.696457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.291 qpair failed and we were unable to recover it. 00:28:56.291 [2024-07-25 01:29:18.696924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.291 [2024-07-25 01:29:18.696956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.291 qpair failed and we were unable to recover it. 00:28:56.291 [2024-07-25 01:29:18.697492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.291 [2024-07-25 01:29:18.697525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.291 qpair failed and we were unable to recover it. 
00:28:56.291 [2024-07-25 01:29:18.698008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.291 [2024-07-25 01:29:18.698041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.291 qpair failed and we were unable to recover it. 00:28:56.291 [2024-07-25 01:29:18.698580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.291 [2024-07-25 01:29:18.698597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.291 qpair failed and we were unable to recover it. 00:28:56.291 [2024-07-25 01:29:18.699102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.291 [2024-07-25 01:29:18.699119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.291 qpair failed and we were unable to recover it. 00:28:56.291 [2024-07-25 01:29:18.699560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.291 [2024-07-25 01:29:18.699591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.291 qpair failed and we were unable to recover it. 00:28:56.291 [2024-07-25 01:29:18.700134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.291 [2024-07-25 01:29:18.700167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.291 qpair failed and we were unable to recover it. 
00:28:56.291 [2024-07-25 01:29:18.700737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.291 [2024-07-25 01:29:18.700769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.291 qpair failed and we were unable to recover it. 00:28:56.291 [2024-07-25 01:29:18.701336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.291 [2024-07-25 01:29:18.701370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.292 qpair failed and we were unable to recover it. 00:28:56.292 [2024-07-25 01:29:18.701930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.292 [2024-07-25 01:29:18.701962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.292 qpair failed and we were unable to recover it. 00:28:56.292 [2024-07-25 01:29:18.702421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.292 [2024-07-25 01:29:18.702455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.292 qpair failed and we were unable to recover it. 00:28:56.292 [2024-07-25 01:29:18.702914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.292 [2024-07-25 01:29:18.702945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.292 qpair failed and we were unable to recover it. 
00:28:56.292 [2024-07-25 01:29:18.703460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.292 [2024-07-25 01:29:18.703493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.292 qpair failed and we were unable to recover it. 00:28:56.292 [2024-07-25 01:29:18.703937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.292 [2024-07-25 01:29:18.703968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.292 qpair failed and we were unable to recover it. 00:28:56.292 [2024-07-25 01:29:18.704529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.292 [2024-07-25 01:29:18.704561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.292 qpair failed and we were unable to recover it. 00:28:56.292 [2024-07-25 01:29:18.705109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.292 [2024-07-25 01:29:18.705142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.292 qpair failed and we were unable to recover it. 00:28:56.292 [2024-07-25 01:29:18.705706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.292 [2024-07-25 01:29:18.705739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.292 qpair failed and we were unable to recover it. 
00:28:56.292 [2024-07-25 01:29:18.706312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.292 [2024-07-25 01:29:18.706345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.292 qpair failed and we were unable to recover it. 00:28:56.292 [2024-07-25 01:29:18.706848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.292 [2024-07-25 01:29:18.706879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.292 qpair failed and we were unable to recover it. 00:28:56.292 [2024-07-25 01:29:18.707453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.292 [2024-07-25 01:29:18.707492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.292 qpair failed and we were unable to recover it. 00:28:56.292 [2024-07-25 01:29:18.708053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.292 [2024-07-25 01:29:18.708085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.292 qpair failed and we were unable to recover it. 00:28:56.292 [2024-07-25 01:29:18.708570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.292 [2024-07-25 01:29:18.708603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.292 qpair failed and we were unable to recover it. 
00:28:56.292 [2024-07-25 01:29:18.709088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.292 [2024-07-25 01:29:18.709122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.292 qpair failed and we were unable to recover it. 00:28:56.292 [2024-07-25 01:29:18.709602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.292 [2024-07-25 01:29:18.709634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.292 qpair failed and we were unable to recover it. 00:28:56.292 [2024-07-25 01:29:18.710156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.292 [2024-07-25 01:29:18.710188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.292 qpair failed and we were unable to recover it. 00:28:56.292 [2024-07-25 01:29:18.710667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.292 [2024-07-25 01:29:18.710698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.292 qpair failed and we were unable to recover it. 00:28:56.292 [2024-07-25 01:29:18.711254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.292 [2024-07-25 01:29:18.711286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.292 qpair failed and we were unable to recover it. 
00:28:56.292 [2024-07-25 01:29:18.711783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.292 [2024-07-25 01:29:18.711814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.292 qpair failed and we were unable to recover it. 00:28:56.292 [2024-07-25 01:29:18.712249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.292 [2024-07-25 01:29:18.712266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.292 qpair failed and we were unable to recover it. 00:28:56.292 [2024-07-25 01:29:18.712798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.292 [2024-07-25 01:29:18.712814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.292 qpair failed and we were unable to recover it. 00:28:56.292 [2024-07-25 01:29:18.713298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.292 [2024-07-25 01:29:18.713331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.292 qpair failed and we were unable to recover it. 00:28:56.292 [2024-07-25 01:29:18.713734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.292 [2024-07-25 01:29:18.713765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.292 qpair failed and we were unable to recover it. 
00:28:56.292 [2024-07-25 01:29:18.714247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.292 [2024-07-25 01:29:18.714280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.292 qpair failed and we were unable to recover it. 00:28:56.292 [2024-07-25 01:29:18.714859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.292 [2024-07-25 01:29:18.714892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.292 qpair failed and we were unable to recover it. 00:28:56.292 [2024-07-25 01:29:18.715341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.292 [2024-07-25 01:29:18.715373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.292 qpair failed and we were unable to recover it. 00:28:56.292 [2024-07-25 01:29:18.715901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.292 [2024-07-25 01:29:18.715917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.292 qpair failed and we were unable to recover it. 00:28:56.292 [2024-07-25 01:29:18.716397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.292 [2024-07-25 01:29:18.716430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.292 qpair failed and we were unable to recover it. 
00:28:56.292 [2024-07-25 01:29:18.716977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.292 [2024-07-25 01:29:18.717008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.292 qpair failed and we were unable to recover it.
[... identical log triplet — connect() failed with errno = 111 (ECONNREFUSED), sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it." — repeats continuously from 01:29:18.717487 through 01:29:18.781860; duplicate entries elided ...]
00:28:56.564 [2024-07-25 01:29:18.782373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.564 [2024-07-25 01:29:18.782407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.564 qpair failed and we were unable to recover it. 00:28:56.564 [2024-07-25 01:29:18.782928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.564 [2024-07-25 01:29:18.782961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.564 qpair failed and we were unable to recover it. 00:28:56.564 [2024-07-25 01:29:18.783517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.564 [2024-07-25 01:29:18.783550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.564 qpair failed and we were unable to recover it. 00:28:56.564 [2024-07-25 01:29:18.784088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.564 [2024-07-25 01:29:18.784122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.564 qpair failed and we were unable to recover it. 00:28:56.564 [2024-07-25 01:29:18.784612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.564 [2024-07-25 01:29:18.784644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.564 qpair failed and we were unable to recover it. 
00:28:56.564 [2024-07-25 01:29:18.785093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.564 [2024-07-25 01:29:18.785125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.564 qpair failed and we were unable to recover it. 00:28:56.564 [2024-07-25 01:29:18.785580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.564 [2024-07-25 01:29:18.785612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.564 qpair failed and we were unable to recover it. 00:28:56.564 [2024-07-25 01:29:18.786171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.564 [2024-07-25 01:29:18.786204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.564 qpair failed and we were unable to recover it. 00:28:56.564 [2024-07-25 01:29:18.786782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.564 [2024-07-25 01:29:18.786815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.564 qpair failed and we were unable to recover it. 00:28:56.564 [2024-07-25 01:29:18.787347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.564 [2024-07-25 01:29:18.787380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.564 qpair failed and we were unable to recover it. 
00:28:56.564 [2024-07-25 01:29:18.787896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.564 [2024-07-25 01:29:18.787928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.564 qpair failed and we were unable to recover it. 00:28:56.564 [2024-07-25 01:29:18.788483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.564 [2024-07-25 01:29:18.788500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.564 qpair failed and we were unable to recover it. 00:28:56.564 [2024-07-25 01:29:18.788908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.564 [2024-07-25 01:29:18.788924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.564 qpair failed and we were unable to recover it. 00:28:56.564 [2024-07-25 01:29:18.789496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.565 [2024-07-25 01:29:18.789529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.565 qpair failed and we were unable to recover it. 00:28:56.565 [2024-07-25 01:29:18.790038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.565 [2024-07-25 01:29:18.790084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.565 qpair failed and we were unable to recover it. 
00:28:56.565 [2024-07-25 01:29:18.790564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.565 [2024-07-25 01:29:18.790595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.565 qpair failed and we were unable to recover it. 00:28:56.565 [2024-07-25 01:29:18.791138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.565 [2024-07-25 01:29:18.791172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.565 qpair failed and we were unable to recover it. 00:28:56.565 [2024-07-25 01:29:18.791686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.565 [2024-07-25 01:29:18.791718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.565 qpair failed and we were unable to recover it. 00:28:56.565 [2024-07-25 01:29:18.792175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.565 [2024-07-25 01:29:18.792222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.565 qpair failed and we were unable to recover it. 00:28:56.565 [2024-07-25 01:29:18.792676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.565 [2024-07-25 01:29:18.792708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.565 qpair failed and we were unable to recover it. 
00:28:56.565 [2024-07-25 01:29:18.793257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.565 [2024-07-25 01:29:18.793290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.565 qpair failed and we were unable to recover it. 00:28:56.565 [2024-07-25 01:29:18.793803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.565 [2024-07-25 01:29:18.793839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.565 qpair failed and we were unable to recover it. 00:28:56.565 [2024-07-25 01:29:18.794307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.565 [2024-07-25 01:29:18.794340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.565 qpair failed and we were unable to recover it. 00:28:56.565 [2024-07-25 01:29:18.794792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.565 [2024-07-25 01:29:18.794809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.565 qpair failed and we were unable to recover it. 00:28:56.565 [2024-07-25 01:29:18.795235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.565 [2024-07-25 01:29:18.795268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.565 qpair failed and we were unable to recover it. 
00:28:56.565 [2024-07-25 01:29:18.795832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.565 [2024-07-25 01:29:18.795864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.565 qpair failed and we were unable to recover it. 00:28:56.565 [2024-07-25 01:29:18.796426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.565 [2024-07-25 01:29:18.796459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.565 qpair failed and we were unable to recover it. 00:28:56.565 [2024-07-25 01:29:18.796940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.565 [2024-07-25 01:29:18.796978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.565 qpair failed and we were unable to recover it. 00:28:56.565 [2024-07-25 01:29:18.797539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.565 [2024-07-25 01:29:18.797571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.565 qpair failed and we were unable to recover it. 00:28:56.565 [2024-07-25 01:29:18.798140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.565 [2024-07-25 01:29:18.798173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.565 qpair failed and we were unable to recover it. 
00:28:56.565 [2024-07-25 01:29:18.798717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.565 [2024-07-25 01:29:18.798748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.565 qpair failed and we were unable to recover it. 00:28:56.565 [2024-07-25 01:29:18.799266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.565 [2024-07-25 01:29:18.799298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.565 qpair failed and we were unable to recover it. 00:28:56.565 [2024-07-25 01:29:18.799850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.565 [2024-07-25 01:29:18.799882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.565 qpair failed and we were unable to recover it. 00:28:56.565 [2024-07-25 01:29:18.800443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.565 [2024-07-25 01:29:18.800476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.565 qpair failed and we were unable to recover it. 00:28:56.565 [2024-07-25 01:29:18.801033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.565 [2024-07-25 01:29:18.801084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.565 qpair failed and we were unable to recover it. 
00:28:56.565 [2024-07-25 01:29:18.801620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.565 [2024-07-25 01:29:18.801652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.565 qpair failed and we were unable to recover it. 00:28:56.565 [2024-07-25 01:29:18.802188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.565 [2024-07-25 01:29:18.802222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.565 qpair failed and we were unable to recover it. 00:28:56.565 [2024-07-25 01:29:18.802688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.565 [2024-07-25 01:29:18.802720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.565 qpair failed and we were unable to recover it. 00:28:56.565 [2024-07-25 01:29:18.803252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.565 [2024-07-25 01:29:18.803285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.565 qpair failed and we were unable to recover it. 00:28:56.565 [2024-07-25 01:29:18.803791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.565 [2024-07-25 01:29:18.803822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.565 qpair failed and we were unable to recover it. 
00:28:56.565 [2024-07-25 01:29:18.804279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.565 [2024-07-25 01:29:18.804297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.565 qpair failed and we were unable to recover it. 00:28:56.565 [2024-07-25 01:29:18.804816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.565 [2024-07-25 01:29:18.804832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.565 qpair failed and we were unable to recover it. 00:28:56.565 [2024-07-25 01:29:18.805379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.565 [2024-07-25 01:29:18.805413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.565 qpair failed and we were unable to recover it. 00:28:56.565 [2024-07-25 01:29:18.805947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.565 [2024-07-25 01:29:18.805979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.565 qpair failed and we were unable to recover it. 00:28:56.565 [2024-07-25 01:29:18.806541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.565 [2024-07-25 01:29:18.806573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.565 qpair failed and we were unable to recover it. 
00:28:56.565 [2024-07-25 01:29:18.807135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.565 [2024-07-25 01:29:18.807169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.565 qpair failed and we were unable to recover it. 00:28:56.565 [2024-07-25 01:29:18.807650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.565 [2024-07-25 01:29:18.807682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.565 qpair failed and we were unable to recover it. 00:28:56.565 [2024-07-25 01:29:18.808198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.565 [2024-07-25 01:29:18.808215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.565 qpair failed and we were unable to recover it. 00:28:56.565 [2024-07-25 01:29:18.808720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.565 [2024-07-25 01:29:18.808753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.565 qpair failed and we were unable to recover it. 00:28:56.565 [2024-07-25 01:29:18.809342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.565 [2024-07-25 01:29:18.809374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.565 qpair failed and we were unable to recover it. 
00:28:56.565 [2024-07-25 01:29:18.809960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.565 [2024-07-25 01:29:18.809991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.566 qpair failed and we were unable to recover it. 00:28:56.566 [2024-07-25 01:29:18.810534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.566 [2024-07-25 01:29:18.810568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.566 qpair failed and we were unable to recover it. 00:28:56.566 [2024-07-25 01:29:18.811028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.566 [2024-07-25 01:29:18.811071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.566 qpair failed and we were unable to recover it. 00:28:56.566 [2024-07-25 01:29:18.811607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.566 [2024-07-25 01:29:18.811639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.566 qpair failed and we were unable to recover it. 00:28:56.566 [2024-07-25 01:29:18.812227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.566 [2024-07-25 01:29:18.812261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.566 qpair failed and we were unable to recover it. 
00:28:56.566 [2024-07-25 01:29:18.812744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.566 [2024-07-25 01:29:18.812775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.566 qpair failed and we were unable to recover it. 00:28:56.566 [2024-07-25 01:29:18.813313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.566 [2024-07-25 01:29:18.813347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.566 qpair failed and we were unable to recover it. 00:28:56.566 [2024-07-25 01:29:18.813931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.566 [2024-07-25 01:29:18.813963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.566 qpair failed and we were unable to recover it. 00:28:56.566 [2024-07-25 01:29:18.814541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.566 [2024-07-25 01:29:18.814573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.566 qpair failed and we were unable to recover it. 00:28:56.566 [2024-07-25 01:29:18.815135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.566 [2024-07-25 01:29:18.815169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.566 qpair failed and we were unable to recover it. 
00:28:56.566 [2024-07-25 01:29:18.815688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.566 [2024-07-25 01:29:18.815719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.566 qpair failed and we were unable to recover it. 00:28:56.566 [2024-07-25 01:29:18.816183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.566 [2024-07-25 01:29:18.816216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.566 qpair failed and we were unable to recover it. 00:28:56.566 [2024-07-25 01:29:18.816771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.566 [2024-07-25 01:29:18.816802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.566 qpair failed and we were unable to recover it. 00:28:56.566 [2024-07-25 01:29:18.817388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.566 [2024-07-25 01:29:18.817421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.566 qpair failed and we were unable to recover it. 00:28:56.566 [2024-07-25 01:29:18.818010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.566 [2024-07-25 01:29:18.818054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.566 qpair failed and we were unable to recover it. 
00:28:56.566 [2024-07-25 01:29:18.818623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.566 [2024-07-25 01:29:18.818656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.566 qpair failed and we were unable to recover it. 00:28:56.566 [2024-07-25 01:29:18.819196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.566 [2024-07-25 01:29:18.819229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.566 qpair failed and we were unable to recover it. 00:28:56.566 [2024-07-25 01:29:18.819772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.566 [2024-07-25 01:29:18.819809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.566 qpair failed and we were unable to recover it. 00:28:56.566 [2024-07-25 01:29:18.820349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.566 [2024-07-25 01:29:18.820384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.566 qpair failed and we were unable to recover it. 00:28:56.566 [2024-07-25 01:29:18.820921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.566 [2024-07-25 01:29:18.820952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.566 qpair failed and we were unable to recover it. 
00:28:56.566 [2024-07-25 01:29:18.821362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.566 [2024-07-25 01:29:18.821395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.566 qpair failed and we were unable to recover it. 00:28:56.566 [2024-07-25 01:29:18.821848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.566 [2024-07-25 01:29:18.821880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.566 qpair failed and we were unable to recover it. 00:28:56.566 [2024-07-25 01:29:18.822391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.566 [2024-07-25 01:29:18.822424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.566 qpair failed and we were unable to recover it. 00:28:56.566 [2024-07-25 01:29:18.822986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.566 [2024-07-25 01:29:18.823018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.566 qpair failed and we were unable to recover it. 00:28:56.566 [2024-07-25 01:29:18.823609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.566 [2024-07-25 01:29:18.823641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.566 qpair failed and we were unable to recover it. 
00:28:56.566 [2024-07-25 01:29:18.824230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.566 [2024-07-25 01:29:18.824264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.566 qpair failed and we were unable to recover it.
[... the same three-record failure sequence (connect() errno = 111 / ECONNREFUSED, sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420, qpair failed and unable to recover) repeats continuously from 01:29:18.824230 through 01:29:18.888877 ...]
00:28:56.569 [2024-07-25 01:29:18.889343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.569 [2024-07-25 01:29:18.889376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.569 qpair failed and we were unable to recover it. 00:28:56.569 [2024-07-25 01:29:18.889886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.569 [2024-07-25 01:29:18.889902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.569 qpair failed and we were unable to recover it. 00:28:56.569 [2024-07-25 01:29:18.890311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.569 [2024-07-25 01:29:18.890342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.569 qpair failed and we were unable to recover it. 00:28:56.569 [2024-07-25 01:29:18.890936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.569 [2024-07-25 01:29:18.890967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.570 qpair failed and we were unable to recover it. 00:28:56.570 [2024-07-25 01:29:18.891495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.570 [2024-07-25 01:29:18.891529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.570 qpair failed and we were unable to recover it. 
00:28:56.570 [2024-07-25 01:29:18.892064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.570 [2024-07-25 01:29:18.892096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.570 qpair failed and we were unable to recover it. 00:28:56.570 [2024-07-25 01:29:18.892667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.570 [2024-07-25 01:29:18.892698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.570 qpair failed and we were unable to recover it. 00:28:56.570 [2024-07-25 01:29:18.893265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.570 [2024-07-25 01:29:18.893298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.570 qpair failed and we were unable to recover it. 00:28:56.570 [2024-07-25 01:29:18.893775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.570 [2024-07-25 01:29:18.893806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.570 qpair failed and we were unable to recover it. 00:28:56.570 [2024-07-25 01:29:18.894331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.570 [2024-07-25 01:29:18.894364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.570 qpair failed and we were unable to recover it. 
00:28:56.570 [2024-07-25 01:29:18.894921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.570 [2024-07-25 01:29:18.894953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.570 qpair failed and we were unable to recover it. 00:28:56.570 [2024-07-25 01:29:18.895449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.570 [2024-07-25 01:29:18.895481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.570 qpair failed and we were unable to recover it. 00:28:56.570 [2024-07-25 01:29:18.896230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.570 [2024-07-25 01:29:18.896276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.570 qpair failed and we were unable to recover it. 00:28:56.570 [2024-07-25 01:29:18.896706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.570 [2024-07-25 01:29:18.896737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.570 qpair failed and we were unable to recover it. 00:28:56.570 [2024-07-25 01:29:18.897273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.570 [2024-07-25 01:29:18.897307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.570 qpair failed and we were unable to recover it. 
00:28:56.570 [2024-07-25 01:29:18.897710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.570 [2024-07-25 01:29:18.897742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.570 qpair failed and we were unable to recover it. 00:28:56.570 [2024-07-25 01:29:18.898196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.570 [2024-07-25 01:29:18.898229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.570 qpair failed and we were unable to recover it. 00:28:56.570 [2024-07-25 01:29:18.898714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.570 [2024-07-25 01:29:18.898730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.570 qpair failed and we were unable to recover it. 00:28:56.570 [2024-07-25 01:29:18.899176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.570 [2024-07-25 01:29:18.899209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.570 qpair failed and we were unable to recover it. 00:28:56.570 [2024-07-25 01:29:18.899671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.570 [2024-07-25 01:29:18.899703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.570 qpair failed and we were unable to recover it. 
00:28:56.570 [2024-07-25 01:29:18.900159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.570 [2024-07-25 01:29:18.900192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.570 qpair failed and we were unable to recover it. 00:28:56.570 [2024-07-25 01:29:18.900731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.570 [2024-07-25 01:29:18.900763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.570 qpair failed and we were unable to recover it. 00:28:56.570 [2024-07-25 01:29:18.901344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.570 [2024-07-25 01:29:18.901377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.570 qpair failed and we were unable to recover it. 00:28:56.570 [2024-07-25 01:29:18.901878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.570 [2024-07-25 01:29:18.901894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.570 qpair failed and we were unable to recover it. 00:28:56.570 [2024-07-25 01:29:18.902405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.570 [2024-07-25 01:29:18.902423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.570 qpair failed and we were unable to recover it. 
00:28:56.570 [2024-07-25 01:29:18.902888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.570 [2024-07-25 01:29:18.902904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.570 qpair failed and we were unable to recover it. 00:28:56.570 [2024-07-25 01:29:18.903366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.570 [2024-07-25 01:29:18.903383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.570 qpair failed and we were unable to recover it. 00:28:56.570 [2024-07-25 01:29:18.903814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.570 [2024-07-25 01:29:18.903830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.570 qpair failed and we were unable to recover it. 00:28:56.570 [2024-07-25 01:29:18.904215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.570 [2024-07-25 01:29:18.904232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.570 qpair failed and we were unable to recover it. 00:28:56.570 [2024-07-25 01:29:18.904736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.570 [2024-07-25 01:29:18.904768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.570 qpair failed and we were unable to recover it. 
00:28:56.570 [2024-07-25 01:29:18.905254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.570 [2024-07-25 01:29:18.905286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.570 qpair failed and we were unable to recover it. 00:28:56.570 [2024-07-25 01:29:18.905809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.570 [2024-07-25 01:29:18.905840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.570 qpair failed and we were unable to recover it. 00:28:56.570 [2024-07-25 01:29:18.906378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.570 [2024-07-25 01:29:18.906411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.570 qpair failed and we were unable to recover it. 00:28:56.570 [2024-07-25 01:29:18.906890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.570 [2024-07-25 01:29:18.906906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.570 qpair failed and we were unable to recover it. 00:28:56.570 [2024-07-25 01:29:18.907332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.570 [2024-07-25 01:29:18.907349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.570 qpair failed and we were unable to recover it. 
00:28:56.570 [2024-07-25 01:29:18.907855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.570 [2024-07-25 01:29:18.907887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.570 qpair failed and we were unable to recover it. 00:28:56.570 [2024-07-25 01:29:18.908466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.570 [2024-07-25 01:29:18.908483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.570 qpair failed and we were unable to recover it. 00:28:56.570 [2024-07-25 01:29:18.908998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.570 [2024-07-25 01:29:18.909036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.570 qpair failed and we were unable to recover it. 00:28:56.570 [2024-07-25 01:29:18.909541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.570 [2024-07-25 01:29:18.909573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.570 qpair failed and we were unable to recover it. 00:28:56.570 [2024-07-25 01:29:18.910056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.570 [2024-07-25 01:29:18.910089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.570 qpair failed and we were unable to recover it. 
00:28:56.570 [2024-07-25 01:29:18.910623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.570 [2024-07-25 01:29:18.910640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.570 qpair failed and we were unable to recover it. 00:28:56.570 [2024-07-25 01:29:18.911077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.571 [2024-07-25 01:29:18.911110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.571 qpair failed and we were unable to recover it. 00:28:56.571 [2024-07-25 01:29:18.911653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.571 [2024-07-25 01:29:18.911684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.571 qpair failed and we were unable to recover it. 00:28:56.571 [2024-07-25 01:29:18.912252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.571 [2024-07-25 01:29:18.912285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.571 qpair failed and we were unable to recover it. 00:28:56.571 [2024-07-25 01:29:18.912825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.571 [2024-07-25 01:29:18.912856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.571 qpair failed and we were unable to recover it. 
00:28:56.571 [2024-07-25 01:29:18.913315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.571 [2024-07-25 01:29:18.913348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.571 qpair failed and we were unable to recover it. 00:28:56.571 [2024-07-25 01:29:18.913819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.571 [2024-07-25 01:29:18.913852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.571 qpair failed and we were unable to recover it. 00:28:56.571 [2024-07-25 01:29:18.914297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.571 [2024-07-25 01:29:18.914329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.571 qpair failed and we were unable to recover it. 00:28:56.571 [2024-07-25 01:29:18.914865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.571 [2024-07-25 01:29:18.914881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.571 qpair failed and we were unable to recover it. 00:28:56.571 [2024-07-25 01:29:18.915382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.571 [2024-07-25 01:29:18.915425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.571 qpair failed and we were unable to recover it. 
00:28:56.571 [2024-07-25 01:29:18.915861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.571 [2024-07-25 01:29:18.915893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.571 qpair failed and we were unable to recover it. 00:28:56.571 [2024-07-25 01:29:18.916412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.571 [2024-07-25 01:29:18.916445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.571 qpair failed and we were unable to recover it. 00:28:56.571 [2024-07-25 01:29:18.916993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.571 [2024-07-25 01:29:18.917025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.571 qpair failed and we were unable to recover it. 00:28:56.571 [2024-07-25 01:29:18.917518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.571 [2024-07-25 01:29:18.917551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.571 qpair failed and we were unable to recover it. 00:28:56.571 [2024-07-25 01:29:18.918011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.571 [2024-07-25 01:29:18.918027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.571 qpair failed and we were unable to recover it. 
00:28:56.571 [2024-07-25 01:29:18.918535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.571 [2024-07-25 01:29:18.918551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.571 qpair failed and we were unable to recover it. 00:28:56.571 [2024-07-25 01:29:18.918978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.571 [2024-07-25 01:29:18.919010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.571 qpair failed and we were unable to recover it. 00:28:56.571 [2024-07-25 01:29:18.919562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.571 [2024-07-25 01:29:18.919579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.571 qpair failed and we were unable to recover it. 00:28:56.571 [2024-07-25 01:29:18.920104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.571 [2024-07-25 01:29:18.920138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.571 qpair failed and we were unable to recover it. 00:28:56.571 [2024-07-25 01:29:18.920614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.571 [2024-07-25 01:29:18.920645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.571 qpair failed and we were unable to recover it. 
00:28:56.571 [2024-07-25 01:29:18.921186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.571 [2024-07-25 01:29:18.921219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.571 qpair failed and we were unable to recover it. 00:28:56.571 [2024-07-25 01:29:18.921805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.571 [2024-07-25 01:29:18.921836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.571 qpair failed and we were unable to recover it. 00:28:56.571 [2024-07-25 01:29:18.922454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.571 [2024-07-25 01:29:18.922487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.571 qpair failed and we were unable to recover it. 00:28:56.571 [2024-07-25 01:29:18.923050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.571 [2024-07-25 01:29:18.923083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.571 qpair failed and we were unable to recover it. 00:28:56.571 [2024-07-25 01:29:18.923668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.571 [2024-07-25 01:29:18.923701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.571 qpair failed and we were unable to recover it. 
00:28:56.571 [2024-07-25 01:29:18.924160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.571 [2024-07-25 01:29:18.924194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.571 qpair failed and we were unable to recover it. 00:28:56.571 [2024-07-25 01:29:18.924712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.571 [2024-07-25 01:29:18.924755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.571 qpair failed and we were unable to recover it. 00:28:56.571 [2024-07-25 01:29:18.925194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.571 [2024-07-25 01:29:18.925211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.571 qpair failed and we were unable to recover it. 00:28:56.571 [2024-07-25 01:29:18.925660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.571 [2024-07-25 01:29:18.925691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.571 qpair failed and we were unable to recover it. 00:28:56.571 [2024-07-25 01:29:18.926207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.571 [2024-07-25 01:29:18.926240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.571 qpair failed and we were unable to recover it. 
00:28:56.571 [2024-07-25 01:29:18.926778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.571 [2024-07-25 01:29:18.926810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.571 qpair failed and we were unable to recover it. 00:28:56.571 [2024-07-25 01:29:18.927269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.571 [2024-07-25 01:29:18.927302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.571 qpair failed and we were unable to recover it. 00:28:56.571 [2024-07-25 01:29:18.927753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.571 [2024-07-25 01:29:18.927784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.571 qpair failed and we were unable to recover it. 00:28:56.571 [2024-07-25 01:29:18.928348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.571 [2024-07-25 01:29:18.928381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.571 qpair failed and we were unable to recover it. 00:28:56.571 [2024-07-25 01:29:18.928859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.571 [2024-07-25 01:29:18.928890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.571 qpair failed and we were unable to recover it. 
00:28:56.571 [2024-07-25 01:29:18.929378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.571 [2024-07-25 01:29:18.929412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.571 qpair failed and we were unable to recover it. 
[the same posix_sock_create / nvme_tcp_qpair_connect_sock error pair, followed by "qpair failed and we were unable to recover it.", repeats verbatim with advancing timestamps through 00:28:56.575 [2024-07-25 01:29:18.986592]; every retry targets tqpair=0x7f83dc000b90, addr=10.0.0.2, port=4420 and fails with errno = 111]
00:28:56.575 [2024-07-25 01:29:18.987103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.575 [2024-07-25 01:29:18.987121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.575 qpair failed and we were unable to recover it. 00:28:56.575 [2024-07-25 01:29:18.987552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.575 [2024-07-25 01:29:18.987568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.575 qpair failed and we were unable to recover it. 00:28:56.575 [2024-07-25 01:29:18.988067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.575 [2024-07-25 01:29:18.988085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.575 qpair failed and we were unable to recover it. 00:28:56.575 [2024-07-25 01:29:18.988504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.575 [2024-07-25 01:29:18.988520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.575 qpair failed and we were unable to recover it. 00:28:56.575 [2024-07-25 01:29:18.988956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.575 [2024-07-25 01:29:18.988971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.575 qpair failed and we were unable to recover it. 
00:28:56.575 [2024-07-25 01:29:18.989346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.575 [2024-07-25 01:29:18.989362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.575 qpair failed and we were unable to recover it. 00:28:56.575 [2024-07-25 01:29:18.989790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.575 [2024-07-25 01:29:18.989806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.575 qpair failed and we were unable to recover it. 00:28:56.575 [2024-07-25 01:29:18.990247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.575 [2024-07-25 01:29:18.990263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.575 qpair failed and we were unable to recover it. 00:28:56.575 [2024-07-25 01:29:18.990768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.575 [2024-07-25 01:29:18.990784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.575 qpair failed and we were unable to recover it. 00:28:56.575 [2024-07-25 01:29:18.991315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.575 [2024-07-25 01:29:18.991331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.575 qpair failed and we were unable to recover it. 
00:28:56.575 [2024-07-25 01:29:18.991789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.575 [2024-07-25 01:29:18.991805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.575 qpair failed and we were unable to recover it. 00:28:56.575 [2024-07-25 01:29:18.992313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.575 [2024-07-25 01:29:18.992330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.575 qpair failed and we were unable to recover it. 00:28:56.575 [2024-07-25 01:29:18.992845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.575 [2024-07-25 01:29:18.992862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.575 qpair failed and we were unable to recover it. 00:28:56.575 [2024-07-25 01:29:18.993344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.575 [2024-07-25 01:29:18.993360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.575 qpair failed and we were unable to recover it. 00:28:56.575 [2024-07-25 01:29:18.993889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.575 [2024-07-25 01:29:18.993905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.575 qpair failed and we were unable to recover it. 
00:28:56.575 [2024-07-25 01:29:18.994348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.575 [2024-07-25 01:29:18.994365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.575 qpair failed and we were unable to recover it. 00:28:56.575 [2024-07-25 01:29:18.994876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.575 [2024-07-25 01:29:18.994893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.575 qpair failed and we were unable to recover it. 00:28:56.575 [2024-07-25 01:29:18.995427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.575 [2024-07-25 01:29:18.995444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.575 qpair failed and we were unable to recover it. 00:28:56.575 [2024-07-25 01:29:18.995950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.575 [2024-07-25 01:29:18.995966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.575 qpair failed and we were unable to recover it. 00:28:56.575 [2024-07-25 01:29:18.996499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.575 [2024-07-25 01:29:18.996516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.575 qpair failed and we were unable to recover it. 
00:28:56.575 [2024-07-25 01:29:18.997012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.575 [2024-07-25 01:29:18.997028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.575 qpair failed and we were unable to recover it. 00:28:56.575 [2024-07-25 01:29:18.997559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.575 [2024-07-25 01:29:18.997576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.575 qpair failed and we were unable to recover it. 00:28:56.575 [2024-07-25 01:29:18.997993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.575 [2024-07-25 01:29:18.998010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.575 qpair failed and we were unable to recover it. 00:28:56.575 [2024-07-25 01:29:18.998510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.575 [2024-07-25 01:29:18.998526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.575 qpair failed and we were unable to recover it. 00:28:56.575 [2024-07-25 01:29:18.999031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.575 [2024-07-25 01:29:18.999053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.575 qpair failed and we were unable to recover it. 
00:28:56.575 [2024-07-25 01:29:18.999557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.575 [2024-07-25 01:29:18.999573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.575 qpair failed and we were unable to recover it. 00:28:56.575 [2024-07-25 01:29:19.000088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.575 [2024-07-25 01:29:19.000106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.575 qpair failed and we were unable to recover it. 00:28:56.575 [2024-07-25 01:29:19.000607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.575 [2024-07-25 01:29:19.000623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.575 qpair failed and we were unable to recover it. 00:28:56.575 [2024-07-25 01:29:19.001140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.575 [2024-07-25 01:29:19.001157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.575 qpair failed and we were unable to recover it. 00:28:56.575 [2024-07-25 01:29:19.001664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.575 [2024-07-25 01:29:19.001680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.575 qpair failed and we were unable to recover it. 
00:28:56.575 [2024-07-25 01:29:19.002207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.575 [2024-07-25 01:29:19.002224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.575 qpair failed and we were unable to recover it. 00:28:56.575 [2024-07-25 01:29:19.002793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.575 [2024-07-25 01:29:19.002809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.575 qpair failed and we were unable to recover it. 00:28:56.575 [2024-07-25 01:29:19.003353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.575 [2024-07-25 01:29:19.003370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.575 qpair failed and we were unable to recover it. 00:28:56.576 [2024-07-25 01:29:19.003874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.576 [2024-07-25 01:29:19.003889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.576 qpair failed and we were unable to recover it. 00:28:56.576 [2024-07-25 01:29:19.004421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.576 [2024-07-25 01:29:19.004441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.576 qpair failed and we were unable to recover it. 
00:28:56.576 [2024-07-25 01:29:19.004924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.576 [2024-07-25 01:29:19.004940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.576 qpair failed and we were unable to recover it. 00:28:56.576 [2024-07-25 01:29:19.005441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.576 [2024-07-25 01:29:19.005457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.576 qpair failed and we were unable to recover it. 00:28:56.576 [2024-07-25 01:29:19.005894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.576 [2024-07-25 01:29:19.005910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.576 qpair failed and we were unable to recover it. 00:28:56.576 [2024-07-25 01:29:19.006412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.576 [2024-07-25 01:29:19.006429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.576 qpair failed and we were unable to recover it. 00:28:56.576 [2024-07-25 01:29:19.006869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.576 [2024-07-25 01:29:19.006885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.576 qpair failed and we were unable to recover it. 
00:28:56.576 [2024-07-25 01:29:19.007389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.576 [2024-07-25 01:29:19.007406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.576 qpair failed and we were unable to recover it. 00:28:56.576 [2024-07-25 01:29:19.007950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.576 [2024-07-25 01:29:19.007966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.576 qpair failed and we were unable to recover it. 00:28:56.576 [2024-07-25 01:29:19.008412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.576 [2024-07-25 01:29:19.008429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.576 qpair failed and we were unable to recover it. 00:28:56.576 [2024-07-25 01:29:19.008860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.576 [2024-07-25 01:29:19.008876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.576 qpair failed and we were unable to recover it. 00:28:56.576 [2024-07-25 01:29:19.009358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.576 [2024-07-25 01:29:19.009375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.576 qpair failed and we were unable to recover it. 
00:28:56.576 [2024-07-25 01:29:19.009796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.576 [2024-07-25 01:29:19.009811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.576 qpair failed and we were unable to recover it. 00:28:56.576 [2024-07-25 01:29:19.010314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.576 [2024-07-25 01:29:19.010331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.576 qpair failed and we were unable to recover it. 00:28:56.576 [2024-07-25 01:29:19.010881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.576 [2024-07-25 01:29:19.010897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.576 qpair failed and we were unable to recover it. 00:28:56.576 [2024-07-25 01:29:19.011402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.576 [2024-07-25 01:29:19.011418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.576 qpair failed and we were unable to recover it. 00:28:56.576 [2024-07-25 01:29:19.011930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.576 [2024-07-25 01:29:19.011946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.576 qpair failed and we were unable to recover it. 
00:28:56.576 [2024-07-25 01:29:19.012466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.576 [2024-07-25 01:29:19.012483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.576 qpair failed and we were unable to recover it. 00:28:56.576 [2024-07-25 01:29:19.013023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.576 [2024-07-25 01:29:19.013039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.576 qpair failed and we were unable to recover it. 00:28:56.576 [2024-07-25 01:29:19.013472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.576 [2024-07-25 01:29:19.013488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.576 qpair failed and we were unable to recover it. 00:28:56.576 [2024-07-25 01:29:19.014004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.576 [2024-07-25 01:29:19.014021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.576 qpair failed and we were unable to recover it. 00:28:56.576 [2024-07-25 01:29:19.014384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.576 [2024-07-25 01:29:19.014401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.576 qpair failed and we were unable to recover it. 
00:28:56.576 [2024-07-25 01:29:19.014906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.576 [2024-07-25 01:29:19.014923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.576 qpair failed and we were unable to recover it. 00:28:56.576 [2024-07-25 01:29:19.015388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.576 [2024-07-25 01:29:19.015404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.576 qpair failed and we were unable to recover it. 00:28:56.576 [2024-07-25 01:29:19.015917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.576 [2024-07-25 01:29:19.015933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.576 qpair failed and we were unable to recover it. 00:28:56.576 [2024-07-25 01:29:19.016435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.576 [2024-07-25 01:29:19.016453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.576 qpair failed and we were unable to recover it. 00:28:56.576 [2024-07-25 01:29:19.016885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.576 [2024-07-25 01:29:19.016901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.576 qpair failed and we were unable to recover it. 
00:28:56.576 [2024-07-25 01:29:19.017405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.576 [2024-07-25 01:29:19.017422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.576 qpair failed and we were unable to recover it. 00:28:56.576 [2024-07-25 01:29:19.017983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.576 [2024-07-25 01:29:19.017999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.576 qpair failed and we were unable to recover it. 00:28:56.576 [2024-07-25 01:29:19.018483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.576 [2024-07-25 01:29:19.018500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.576 qpair failed and we were unable to recover it. 00:28:56.576 [2024-07-25 01:29:19.019022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.576 [2024-07-25 01:29:19.019039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.576 qpair failed and we were unable to recover it. 00:28:56.576 [2024-07-25 01:29:19.019566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.576 [2024-07-25 01:29:19.019582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.576 qpair failed and we were unable to recover it. 
00:28:56.576 [2024-07-25 01:29:19.020077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.576 [2024-07-25 01:29:19.020094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.576 qpair failed and we were unable to recover it. 00:28:56.576 [2024-07-25 01:29:19.020631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.576 [2024-07-25 01:29:19.020647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.576 qpair failed and we were unable to recover it. 00:28:56.576 [2024-07-25 01:29:19.021170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.576 [2024-07-25 01:29:19.021186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.576 qpair failed and we were unable to recover it. 00:28:56.576 [2024-07-25 01:29:19.021683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.576 [2024-07-25 01:29:19.021699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.576 qpair failed and we were unable to recover it. 00:28:56.576 [2024-07-25 01:29:19.022142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.576 [2024-07-25 01:29:19.022159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.576 qpair failed and we were unable to recover it. 
00:28:56.576 [2024-07-25 01:29:19.022694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.576 [2024-07-25 01:29:19.022710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.576 qpair failed and we were unable to recover it. 00:28:56.577 [2024-07-25 01:29:19.023147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.577 [2024-07-25 01:29:19.023163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.577 qpair failed and we were unable to recover it. 00:28:56.577 [2024-07-25 01:29:19.023668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.577 [2024-07-25 01:29:19.023683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.577 qpair failed and we were unable to recover it. 00:28:56.577 [2024-07-25 01:29:19.024230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.577 [2024-07-25 01:29:19.024246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.577 qpair failed and we were unable to recover it. 00:28:56.577 [2024-07-25 01:29:19.024749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.577 [2024-07-25 01:29:19.024769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.577 qpair failed and we were unable to recover it. 
00:28:56.577 [2024-07-25 01:29:19.025338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.577 [2024-07-25 01:29:19.025354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.577 qpair failed and we were unable to recover it. 00:28:56.577 [2024-07-25 01:29:19.025803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.577 [2024-07-25 01:29:19.025820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.577 qpair failed and we were unable to recover it. 00:28:56.577 [2024-07-25 01:29:19.026325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.577 [2024-07-25 01:29:19.026342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.577 qpair failed and we were unable to recover it. 00:28:56.577 [2024-07-25 01:29:19.026781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.577 [2024-07-25 01:29:19.026798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.577 qpair failed and we were unable to recover it. 00:28:56.577 [2024-07-25 01:29:19.027305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.577 [2024-07-25 01:29:19.027322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.577 qpair failed and we were unable to recover it. 
00:28:56.851 [2024-07-25 01:29:19.083344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.851 [2024-07-25 01:29:19.083360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.851 qpair failed and we were unable to recover it. 00:28:56.851 [2024-07-25 01:29:19.083868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.851 [2024-07-25 01:29:19.083899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.851 qpair failed and we were unable to recover it. 00:28:56.851 [2024-07-25 01:29:19.084476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.851 [2024-07-25 01:29:19.084493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.851 qpair failed and we were unable to recover it. 00:28:56.851 [2024-07-25 01:29:19.084949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.851 [2024-07-25 01:29:19.084965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.851 qpair failed and we were unable to recover it. 00:28:56.851 [2024-07-25 01:29:19.085465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.851 [2024-07-25 01:29:19.085489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.851 qpair failed and we were unable to recover it. 
00:28:56.851 [2024-07-25 01:29:19.085913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.851 [2024-07-25 01:29:19.085929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.851 qpair failed and we were unable to recover it. 00:28:56.851 [2024-07-25 01:29:19.086428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.851 [2024-07-25 01:29:19.086445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.851 qpair failed and we were unable to recover it. 00:28:56.851 [2024-07-25 01:29:19.087000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.851 [2024-07-25 01:29:19.087016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.851 qpair failed and we were unable to recover it. 00:28:56.851 [2024-07-25 01:29:19.087510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.851 [2024-07-25 01:29:19.087526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.851 qpair failed and we were unable to recover it. 00:28:56.851 [2024-07-25 01:29:19.088027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.851 [2024-07-25 01:29:19.088046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.851 qpair failed and we were unable to recover it. 
00:28:56.851 [2024-07-25 01:29:19.088491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.851 [2024-07-25 01:29:19.088509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.851 qpair failed and we were unable to recover it. 00:28:56.851 [2024-07-25 01:29:19.089016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.851 [2024-07-25 01:29:19.089032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.851 qpair failed and we were unable to recover it. 00:28:56.851 [2024-07-25 01:29:19.089551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.851 [2024-07-25 01:29:19.089568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.851 qpair failed and we were unable to recover it. 00:28:56.851 [2024-07-25 01:29:19.090053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.851 [2024-07-25 01:29:19.090069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.851 qpair failed and we were unable to recover it. 00:28:56.851 [2024-07-25 01:29:19.090589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.851 [2024-07-25 01:29:19.090606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.851 qpair failed and we were unable to recover it. 
00:28:56.851 [2024-07-25 01:29:19.091142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.851 [2024-07-25 01:29:19.091160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.851 qpair failed and we were unable to recover it. 00:28:56.851 [2024-07-25 01:29:19.091584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.851 [2024-07-25 01:29:19.091601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.851 qpair failed and we were unable to recover it. 00:28:56.851 [2024-07-25 01:29:19.092041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.851 [2024-07-25 01:29:19.092063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.851 qpair failed and we were unable to recover it. 00:28:56.851 [2024-07-25 01:29:19.092560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.851 [2024-07-25 01:29:19.092576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.851 qpair failed and we were unable to recover it. 00:28:56.851 [2024-07-25 01:29:19.093079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.851 [2024-07-25 01:29:19.093096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.851 qpair failed and we were unable to recover it. 
00:28:56.851 [2024-07-25 01:29:19.093446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.851 [2024-07-25 01:29:19.093460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.851 qpair failed and we were unable to recover it. 00:28:56.851 [2024-07-25 01:29:19.093882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.851 [2024-07-25 01:29:19.093898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.851 qpair failed and we were unable to recover it. 00:28:56.851 [2024-07-25 01:29:19.094326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.851 [2024-07-25 01:29:19.094343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.851 qpair failed and we were unable to recover it. 00:28:56.851 [2024-07-25 01:29:19.094769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.851 [2024-07-25 01:29:19.094784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.851 qpair failed and we were unable to recover it. 00:28:56.851 [2024-07-25 01:29:19.095152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.852 [2024-07-25 01:29:19.095169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.852 qpair failed and we were unable to recover it. 
00:28:56.852 [2024-07-25 01:29:19.095670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.852 [2024-07-25 01:29:19.095686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.852 qpair failed and we were unable to recover it. 00:28:56.852 [2024-07-25 01:29:19.096230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.852 [2024-07-25 01:29:19.096246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.852 qpair failed and we were unable to recover it. 00:28:56.852 [2024-07-25 01:29:19.096751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.852 [2024-07-25 01:29:19.096767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.852 qpair failed and we were unable to recover it. 00:28:56.852 [2024-07-25 01:29:19.097312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.852 [2024-07-25 01:29:19.097329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.852 qpair failed and we were unable to recover it. 00:28:56.852 [2024-07-25 01:29:19.097826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.852 [2024-07-25 01:29:19.097842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.852 qpair failed and we were unable to recover it. 
00:28:56.852 [2024-07-25 01:29:19.098324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.852 [2024-07-25 01:29:19.098341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.852 qpair failed and we were unable to recover it. 00:28:56.852 [2024-07-25 01:29:19.098786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.852 [2024-07-25 01:29:19.098802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.852 qpair failed and we were unable to recover it. 00:28:56.852 [2024-07-25 01:29:19.099281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.852 [2024-07-25 01:29:19.099298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.852 qpair failed and we were unable to recover it. 00:28:56.852 [2024-07-25 01:29:19.099776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.852 [2024-07-25 01:29:19.099792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.852 qpair failed and we were unable to recover it. 00:28:56.852 [2024-07-25 01:29:19.100318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.852 [2024-07-25 01:29:19.100334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.852 qpair failed and we were unable to recover it. 
00:28:56.852 [2024-07-25 01:29:19.100779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.852 [2024-07-25 01:29:19.100795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.852 qpair failed and we were unable to recover it. 00:28:56.852 [2024-07-25 01:29:19.101226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.852 [2024-07-25 01:29:19.101243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.852 qpair failed and we were unable to recover it. 00:28:56.852 [2024-07-25 01:29:19.101743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.852 [2024-07-25 01:29:19.101759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.852 qpair failed and we were unable to recover it. 00:28:56.852 [2024-07-25 01:29:19.102257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.852 [2024-07-25 01:29:19.102274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.852 qpair failed and we were unable to recover it. 00:28:56.852 [2024-07-25 01:29:19.102697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.852 [2024-07-25 01:29:19.102713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.852 qpair failed and we were unable to recover it. 
00:28:56.852 [2024-07-25 01:29:19.103215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.852 [2024-07-25 01:29:19.103232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.852 qpair failed and we were unable to recover it. 00:28:56.852 [2024-07-25 01:29:19.103783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.852 [2024-07-25 01:29:19.103799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.852 qpair failed and we were unable to recover it. 00:28:56.852 [2024-07-25 01:29:19.104297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.852 [2024-07-25 01:29:19.104314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.852 qpair failed and we were unable to recover it. 00:28:56.852 [2024-07-25 01:29:19.104743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.852 [2024-07-25 01:29:19.104759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.852 qpair failed and we were unable to recover it. 00:28:56.852 [2024-07-25 01:29:19.105257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.852 [2024-07-25 01:29:19.105276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.852 qpair failed and we were unable to recover it. 
00:28:56.852 [2024-07-25 01:29:19.105799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.852 [2024-07-25 01:29:19.105815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.852 qpair failed and we were unable to recover it. 00:28:56.852 [2024-07-25 01:29:19.106309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.852 [2024-07-25 01:29:19.106326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.852 qpair failed and we were unable to recover it. 00:28:56.852 [2024-07-25 01:29:19.106745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.852 [2024-07-25 01:29:19.106761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.852 qpair failed and we were unable to recover it. 00:28:56.852 [2024-07-25 01:29:19.107184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.852 [2024-07-25 01:29:19.107201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.852 qpair failed and we were unable to recover it. 00:28:56.852 [2024-07-25 01:29:19.107689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.852 [2024-07-25 01:29:19.107705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.852 qpair failed and we were unable to recover it. 
00:28:56.852 [2024-07-25 01:29:19.108208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.852 [2024-07-25 01:29:19.108224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.852 qpair failed and we were unable to recover it. 00:28:56.852 [2024-07-25 01:29:19.108705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.852 [2024-07-25 01:29:19.108721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.852 qpair failed and we were unable to recover it. 00:28:56.852 [2024-07-25 01:29:19.109200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.852 [2024-07-25 01:29:19.109216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.852 qpair failed and we were unable to recover it. 00:28:56.852 [2024-07-25 01:29:19.109739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.852 [2024-07-25 01:29:19.109755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.852 qpair failed and we were unable to recover it. 00:28:56.852 [2024-07-25 01:29:19.110235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.852 [2024-07-25 01:29:19.110252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.852 qpair failed and we were unable to recover it. 
00:28:56.852 [2024-07-25 01:29:19.110679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.852 [2024-07-25 01:29:19.110695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.852 qpair failed and we were unable to recover it.
00:28:56.852 [2024-07-25 01:29:19.111195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.852 [2024-07-25 01:29:19.111212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.852 qpair failed and we were unable to recover it.
00:28:56.852 [2024-07-25 01:29:19.111679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.852 [2024-07-25 01:29:19.111695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.852 qpair failed and we were unable to recover it.
00:28:56.852 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1058327 Killed "${NVMF_APP[@]}" "$@"
00:28:56.852 [2024-07-25 01:29:19.112180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.852 [2024-07-25 01:29:19.112199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.852 qpair failed and we were unable to recover it.
00:28:56.852 [2024-07-25 01:29:19.112651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.852 [2024-07-25 01:29:19.112667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.852 qpair failed and we were unable to recover it.
00:28:56.852 01:29:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:28:56.852 [2024-07-25 01:29:19.113121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.852 [2024-07-25 01:29:19.113141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.853 qpair failed and we were unable to recover it. 00:28:56.853 01:29:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:28:56.853 01:29:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:56.853 [2024-07-25 01:29:19.113637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.853 [2024-07-25 01:29:19.113656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.853 qpair failed and we were unable to recover it. 00:28:56.853 01:29:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:56.853 [2024-07-25 01:29:19.114100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.853 [2024-07-25 01:29:19.114119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.853 qpair failed and we were unable to recover it. 
00:28:56.853 01:29:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:56.853 [2024-07-25 01:29:19.114569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.853 [2024-07-25 01:29:19.114588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.853 qpair failed and we were unable to recover it. 00:28:56.853 [2024-07-25 01:29:19.115107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.853 [2024-07-25 01:29:19.115124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.853 qpair failed and we were unable to recover it. 00:28:56.853 [2024-07-25 01:29:19.115559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.853 [2024-07-25 01:29:19.115576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.853 qpair failed and we were unable to recover it. 00:28:56.853 [2024-07-25 01:29:19.115999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.853 [2024-07-25 01:29:19.116015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.853 qpair failed and we were unable to recover it. 00:28:56.853 [2024-07-25 01:29:19.116582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.853 [2024-07-25 01:29:19.116602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.853 qpair failed and we were unable to recover it. 
00:28:56.853 [2024-07-25 01:29:19.117099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.853 [2024-07-25 01:29:19.117116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.853 qpair failed and we were unable to recover it. 00:28:56.853 [2024-07-25 01:29:19.117567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.853 [2024-07-25 01:29:19.117584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.853 qpair failed and we were unable to recover it. 00:28:56.853 [2024-07-25 01:29:19.118139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.853 [2024-07-25 01:29:19.118158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.853 qpair failed and we were unable to recover it. 00:28:56.853 [2024-07-25 01:29:19.118587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.853 [2024-07-25 01:29:19.118609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.853 qpair failed and we were unable to recover it. 00:28:56.853 [2024-07-25 01:29:19.119038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.853 [2024-07-25 01:29:19.119062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.853 qpair failed and we were unable to recover it. 
00:28:56.853 [2024-07-25 01:29:19.119569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.853 [2024-07-25 01:29:19.119585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.853 qpair failed and we were unable to recover it. 00:28:56.853 [2024-07-25 01:29:19.120085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.853 [2024-07-25 01:29:19.120102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.853 qpair failed and we were unable to recover it. 00:28:56.853 [2024-07-25 01:29:19.120649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.853 [2024-07-25 01:29:19.120665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.853 qpair failed and we were unable to recover it. 00:28:56.853 [2024-07-25 01:29:19.121197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.853 [2024-07-25 01:29:19.121214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.853 qpair failed and we were unable to recover it. 00:28:56.853 01:29:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1059126 00:28:56.853 [2024-07-25 01:29:19.121724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.853 [2024-07-25 01:29:19.121744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.853 qpair failed and we were unable to recover it. 
00:28:56.853 01:29:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1059126
00:28:56.853 01:29:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:28:56.853 01:29:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1059126 ']'
00:28:56.853 [2024-07-25 01:29:19.122255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.853 [2024-07-25 01:29:19.122275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.853 qpair failed and we were unable to recover it.
00:28:56.853 01:29:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:56.853 [2024-07-25 01:29:19.122946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.853 [2024-07-25 01:29:19.122965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.853 qpair failed and we were unable to recover it.
00:28:56.853 01:29:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100
00:28:56.853 01:29:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:56.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:56.853 [... identical connect() failure triplet (errno = 111) elided ...] 00:28:56.853 01:29:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:56.853 01:29:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:56.853 [... 3 further identical connect() failure triplets (errno = 111) elided ...] 
00:28:56.853 [2024-07-25 01:29:19.125913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.853 [2024-07-25 01:29:19.125931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.853 qpair failed and we were unable to recover it. 00:28:56.853 [2024-07-25 01:29:19.126422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.853 [2024-07-25 01:29:19.126440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.853 qpair failed and we were unable to recover it. 00:28:56.853 [2024-07-25 01:29:19.126914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.853 [2024-07-25 01:29:19.126931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.853 qpair failed and we were unable to recover it. 00:28:56.853 [2024-07-25 01:29:19.127394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.853 [2024-07-25 01:29:19.127411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.853 qpair failed and we were unable to recover it. 00:28:56.853 [2024-07-25 01:29:19.127915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.853 [2024-07-25 01:29:19.127932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.853 qpair failed and we were unable to recover it. 
00:28:56.853 [2024-07-25 01:29:19.128367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.853 [2024-07-25 01:29:19.128384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.853 qpair failed and we were unable to recover it. 00:28:56.853 [2024-07-25 01:29:19.128865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.853 [2024-07-25 01:29:19.128884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.853 qpair failed and we were unable to recover it. 00:28:56.853 [2024-07-25 01:29:19.129333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.854 [2024-07-25 01:29:19.129353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.854 qpair failed and we were unable to recover it. 00:28:56.854 [2024-07-25 01:29:19.129853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.854 [2024-07-25 01:29:19.129870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.854 qpair failed and we were unable to recover it. 00:28:56.854 [2024-07-25 01:29:19.130261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.854 [2024-07-25 01:29:19.130279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.854 qpair failed and we were unable to recover it. 
00:28:56.854 [2024-07-25 01:29:19.130710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.854 [2024-07-25 01:29:19.130726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.854 qpair failed and we were unable to recover it. 00:28:56.854 [2024-07-25 01:29:19.131206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.854 [2024-07-25 01:29:19.131225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.854 qpair failed and we were unable to recover it. 00:28:56.854 [2024-07-25 01:29:19.131729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.854 [2024-07-25 01:29:19.131745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.854 qpair failed and we were unable to recover it. 00:28:56.854 [2024-07-25 01:29:19.132249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.854 [2024-07-25 01:29:19.132267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.854 qpair failed and we were unable to recover it. 00:28:56.854 [2024-07-25 01:29:19.132705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.854 [2024-07-25 01:29:19.132722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.854 qpair failed and we were unable to recover it. 
00:28:56.854 [2024-07-25 01:29:19.133163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.854 [2024-07-25 01:29:19.133180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.854 qpair failed and we were unable to recover it. 00:28:56.854 [2024-07-25 01:29:19.133608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.854 [2024-07-25 01:29:19.133624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.854 qpair failed and we were unable to recover it. 00:28:56.854 [2024-07-25 01:29:19.134314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.854 [2024-07-25 01:29:19.134334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.854 qpair failed and we were unable to recover it. 00:28:56.854 [2024-07-25 01:29:19.134792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.854 [2024-07-25 01:29:19.134808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.854 qpair failed and we were unable to recover it. 00:28:56.854 [2024-07-25 01:29:19.135315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.854 [2024-07-25 01:29:19.135333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.854 qpair failed and we were unable to recover it. 
00:28:56.854 [2024-07-25 01:29:19.135777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.854 [2024-07-25 01:29:19.135793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.854 qpair failed and we were unable to recover it. 00:28:56.854 [2024-07-25 01:29:19.136161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.854 [2024-07-25 01:29:19.136178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.854 qpair failed and we were unable to recover it. 00:28:56.854 [2024-07-25 01:29:19.136610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.854 [2024-07-25 01:29:19.136626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.854 qpair failed and we were unable to recover it. 00:28:56.854 [2024-07-25 01:29:19.137130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.854 [2024-07-25 01:29:19.137147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.854 qpair failed and we were unable to recover it. 00:28:56.854 [2024-07-25 01:29:19.137657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.854 [2024-07-25 01:29:19.137673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.854 qpair failed and we were unable to recover it. 
00:28:56.854 [2024-07-25 01:29:19.138352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.854 [2024-07-25 01:29:19.138368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.854 qpair failed and we were unable to recover it. 00:28:56.854 [2024-07-25 01:29:19.138874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.854 [2024-07-25 01:29:19.138891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.854 qpair failed and we were unable to recover it. 00:28:56.854 [2024-07-25 01:29:19.139386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.854 [2024-07-25 01:29:19.139403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.854 qpair failed and we were unable to recover it. 00:28:56.854 [2024-07-25 01:29:19.139884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.854 [2024-07-25 01:29:19.139900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.854 qpair failed and we were unable to recover it. 00:28:56.854 [2024-07-25 01:29:19.140406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.854 [2024-07-25 01:29:19.140424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.854 qpair failed and we were unable to recover it. 
00:28:56.854 [2024-07-25 01:29:19.140878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.854 [2024-07-25 01:29:19.140894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.854 qpair failed and we were unable to recover it. 00:28:56.854 [2024-07-25 01:29:19.141426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.854 [2024-07-25 01:29:19.141443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.854 qpair failed and we were unable to recover it. 00:28:56.854 [2024-07-25 01:29:19.141876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.854 [2024-07-25 01:29:19.141892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.854 qpair failed and we were unable to recover it. 00:28:56.854 [2024-07-25 01:29:19.142336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.854 [2024-07-25 01:29:19.142354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.854 qpair failed and we were unable to recover it. 00:28:56.854 [2024-07-25 01:29:19.142781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.854 [2024-07-25 01:29:19.142797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.854 qpair failed and we were unable to recover it. 
00:28:56.854 [2024-07-25 01:29:19.143250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.854 [2024-07-25 01:29:19.143269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.854 qpair failed and we were unable to recover it. 00:28:56.854 [2024-07-25 01:29:19.143649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.854 [2024-07-25 01:29:19.143666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.854 qpair failed and we were unable to recover it. 00:28:56.854 [2024-07-25 01:29:19.144160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.854 [2024-07-25 01:29:19.144178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.854 qpair failed and we were unable to recover it. 00:28:56.855 [2024-07-25 01:29:19.144597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.855 [2024-07-25 01:29:19.144613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.855 qpair failed and we were unable to recover it. 00:28:56.855 [2024-07-25 01:29:19.145052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.855 [2024-07-25 01:29:19.145068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.855 qpair failed and we were unable to recover it. 
00:28:56.855 [2024-07-25 01:29:19.145588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.855 [2024-07-25 01:29:19.145604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.855 qpair failed and we were unable to recover it. 00:28:56.855 [2024-07-25 01:29:19.146088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.855 [2024-07-25 01:29:19.146105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.855 qpair failed and we were unable to recover it. 00:28:56.855 [2024-07-25 01:29:19.146782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.855 [2024-07-25 01:29:19.146798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.855 qpair failed and we were unable to recover it. 00:28:56.855 [2024-07-25 01:29:19.147232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.855 [2024-07-25 01:29:19.147249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.855 qpair failed and we were unable to recover it. 00:28:56.855 [2024-07-25 01:29:19.147730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.855 [2024-07-25 01:29:19.147746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.855 qpair failed and we were unable to recover it. 
00:28:56.855 [2024-07-25 01:29:19.148129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.855 [2024-07-25 01:29:19.148145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.855 qpair failed and we were unable to recover it. 00:28:56.855 [2024-07-25 01:29:19.148516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.855 [2024-07-25 01:29:19.148533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.855 qpair failed and we were unable to recover it. 00:28:56.855 [2024-07-25 01:29:19.148959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.855 [2024-07-25 01:29:19.148976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.855 qpair failed and we were unable to recover it. 00:28:56.855 [2024-07-25 01:29:19.149297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.855 [2024-07-25 01:29:19.149314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.855 qpair failed and we were unable to recover it. 00:28:56.855 [2024-07-25 01:29:19.149730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.855 [2024-07-25 01:29:19.149746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.855 qpair failed and we were unable to recover it. 
00:28:56.855 [2024-07-25 01:29:19.150249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.855 [2024-07-25 01:29:19.150266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.855 qpair failed and we were unable to recover it. 00:28:56.855 [2024-07-25 01:29:19.150653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.855 [2024-07-25 01:29:19.150669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.855 qpair failed and we were unable to recover it. 00:28:56.855 [2024-07-25 01:29:19.151034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.855 [2024-07-25 01:29:19.151060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.855 qpair failed and we were unable to recover it. 00:28:56.855 [2024-07-25 01:29:19.151517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.855 [2024-07-25 01:29:19.151534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.855 qpair failed and we were unable to recover it. 00:28:56.855 [2024-07-25 01:29:19.151991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.855 [2024-07-25 01:29:19.152007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.855 qpair failed and we were unable to recover it. 
00:28:56.855 [2024-07-25 01:29:19.152467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.855 [2024-07-25 01:29:19.152484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.855 qpair failed and we were unable to recover it. 00:28:56.855 [2024-07-25 01:29:19.152962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.855 [2024-07-25 01:29:19.152980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.855 qpair failed and we were unable to recover it. 00:28:56.855 [2024-07-25 01:29:19.153359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.855 [2024-07-25 01:29:19.153376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.855 qpair failed and we were unable to recover it. 00:28:56.855 [2024-07-25 01:29:19.153867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.855 [2024-07-25 01:29:19.153883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.855 qpair failed and we were unable to recover it. 00:28:56.855 [2024-07-25 01:29:19.154301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.855 [2024-07-25 01:29:19.154321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.855 qpair failed and we were unable to recover it. 
00:28:56.855 [2024-07-25 01:29:19.154699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.855 [2024-07-25 01:29:19.154716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.855 qpair failed and we were unable to recover it. 00:28:56.855 [2024-07-25 01:29:19.155087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.855 [2024-07-25 01:29:19.155106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.855 qpair failed and we were unable to recover it. 00:28:56.855 [2024-07-25 01:29:19.155527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.855 [2024-07-25 01:29:19.155543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.855 qpair failed and we were unable to recover it. 00:28:56.855 [2024-07-25 01:29:19.155805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.855 [2024-07-25 01:29:19.155821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.855 qpair failed and we were unable to recover it. 00:28:56.855 [2024-07-25 01:29:19.156236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.855 [2024-07-25 01:29:19.156253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.855 qpair failed and we were unable to recover it. 
00:28:56.855 [2024-07-25 01:29:19.156662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.855 [2024-07-25 01:29:19.156679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.855 qpair failed and we were unable to recover it. 00:28:56.855 [2024-07-25 01:29:19.157032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.855 [2024-07-25 01:29:19.157058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.855 qpair failed and we were unable to recover it. 00:28:56.855 [2024-07-25 01:29:19.157564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.855 [2024-07-25 01:29:19.157581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.855 qpair failed and we were unable to recover it. 00:28:56.855 [2024-07-25 01:29:19.157991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.855 [2024-07-25 01:29:19.158007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.855 qpair failed and we were unable to recover it. 00:28:56.855 [2024-07-25 01:29:19.158556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.855 [2024-07-25 01:29:19.158574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.855 qpair failed and we were unable to recover it. 
00:28:56.855 [2024-07-25 01:29:19.159056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.855 [2024-07-25 01:29:19.159073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.855 qpair failed and we were unable to recover it. 00:28:56.855 [2024-07-25 01:29:19.159495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.855 [2024-07-25 01:29:19.159512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.855 qpair failed and we were unable to recover it. 00:28:56.855 [2024-07-25 01:29:19.159993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.855 [2024-07-25 01:29:19.160009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.855 qpair failed and we were unable to recover it. 00:28:56.855 [2024-07-25 01:29:19.160383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.855 [2024-07-25 01:29:19.160399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.855 qpair failed and we were unable to recover it. 00:28:56.855 [2024-07-25 01:29:19.160765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.855 [2024-07-25 01:29:19.160781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.855 qpair failed and we were unable to recover it. 
00:28:56.855 [2024-07-25 01:29:19.161211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.856 [2024-07-25 01:29:19.161228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.856 qpair failed and we were unable to recover it. 00:28:56.856 [2024-07-25 01:29:19.161643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.856 [2024-07-25 01:29:19.161659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.856 qpair failed and we were unable to recover it. 00:28:56.856 [2024-07-25 01:29:19.162374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.856 [2024-07-25 01:29:19.162392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.856 qpair failed and we were unable to recover it. 00:28:56.856 [2024-07-25 01:29:19.162743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.856 [2024-07-25 01:29:19.162759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.856 qpair failed and we were unable to recover it. 00:28:56.856 [2024-07-25 01:29:19.163258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.856 [2024-07-25 01:29:19.163274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.856 qpair failed and we were unable to recover it. 
00:28:56.856 [2024-07-25 01:29:19.163689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.856 [2024-07-25 01:29:19.163705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.856 qpair failed and we were unable to recover it. 00:28:56.856 [2024-07-25 01:29:19.164209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.856 [2024-07-25 01:29:19.164226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.856 qpair failed and we were unable to recover it. 00:28:56.856 [2024-07-25 01:29:19.164647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.856 [2024-07-25 01:29:19.164663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.856 qpair failed and we were unable to recover it. 00:28:56.856 [2024-07-25 01:29:19.165077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.856 [2024-07-25 01:29:19.165095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.856 qpair failed and we were unable to recover it. 00:28:56.856 [2024-07-25 01:29:19.165564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.856 [2024-07-25 01:29:19.165581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.856 qpair failed and we were unable to recover it. 
00:28:56.856 [2024-07-25 01:29:19.172136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.856 [2024-07-25 01:29:19.172153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.856 qpair failed and we were unable to recover it.
00:28:56.856 [2024-07-25 01:29:19.172375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.856 [2024-07-25 01:29:19.172391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.856 qpair failed and we were unable to recover it.
00:28:56.856 [2024-07-25 01:29:19.172447] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization...
00:28:56.856 [2024-07-25 01:29:19.172508] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:56.856 [2024-07-25 01:29:19.172813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.856 [2024-07-25 01:29:19.172833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.856 qpair failed and we were unable to recover it.
00:28:56.856 [2024-07-25 01:29:19.173308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.856 [2024-07-25 01:29:19.173325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.856 qpair failed and we were unable to recover it.
00:28:56.858 EAL: No free 2048 kB hugepages reported on node 1
00:28:56.858 [2024-07-25 01:29:19.202306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.858 [2024-07-25 01:29:19.202324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.858 qpair failed and we were unable to recover it.
00:28:56.858 [2024-07-25 01:29:19.202738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.858 [2024-07-25 01:29:19.202752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.858 qpair failed and we were unable to recover it.
00:28:56.858 [2024-07-25 01:29:19.203178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.858 [2024-07-25 01:29:19.203194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.858 qpair failed and we were unable to recover it.
00:28:56.858 [2024-07-25 01:29:19.203565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.858 [2024-07-25 01:29:19.203597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.858 qpair failed and we were unable to recover it.
00:28:56.858 [2024-07-25 01:29:19.203792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.858 [2024-07-25 01:29:19.203823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.858 qpair failed and we were unable to recover it.
00:28:56.858 [2024-07-25 01:29:19.204279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.858 [2024-07-25 01:29:19.204311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.858 qpair failed and we were unable to recover it. 00:28:56.858 [2024-07-25 01:29:19.204759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.858 [2024-07-25 01:29:19.204801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.858 qpair failed and we were unable to recover it. 00:28:56.858 [2024-07-25 01:29:19.205270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.858 [2024-07-25 01:29:19.205286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.858 qpair failed and we were unable to recover it. 00:28:56.858 [2024-07-25 01:29:19.205708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.858 [2024-07-25 01:29:19.205724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.858 qpair failed and we were unable to recover it. 00:28:56.858 [2024-07-25 01:29:19.206124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.858 [2024-07-25 01:29:19.206140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.858 qpair failed and we were unable to recover it. 
00:28:56.861 [2024-07-25 01:29:19.246927] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:28:56.861 [2024-07-25 01:29:19.255458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.861 [2024-07-25 01:29:19.255474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.861 qpair failed and we were unable to recover it. 00:28:56.861 [2024-07-25 01:29:19.255947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.861 [2024-07-25 01:29:19.255963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.861 qpair failed and we were unable to recover it. 00:28:56.861 [2024-07-25 01:29:19.256385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.861 [2024-07-25 01:29:19.256401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.861 qpair failed and we were unable to recover it. 00:28:56.861 [2024-07-25 01:29:19.256751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.861 [2024-07-25 01:29:19.256767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.861 qpair failed and we were unable to recover it. 00:28:56.861 [2024-07-25 01:29:19.257182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.862 [2024-07-25 01:29:19.257198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.862 qpair failed and we were unable to recover it. 
00:28:56.862 [2024-07-25 01:29:19.257638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.862 [2024-07-25 01:29:19.257654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.862 qpair failed and we were unable to recover it. 00:28:56.862 [2024-07-25 01:29:19.258038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.862 [2024-07-25 01:29:19.258063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.862 qpair failed and we were unable to recover it. 00:28:56.862 [2024-07-25 01:29:19.258540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.862 [2024-07-25 01:29:19.258555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.862 qpair failed and we were unable to recover it. 00:28:56.862 [2024-07-25 01:29:19.258896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.862 [2024-07-25 01:29:19.258912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.862 qpair failed and we were unable to recover it. 00:28:56.862 [2024-07-25 01:29:19.259324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.862 [2024-07-25 01:29:19.259340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.862 qpair failed and we were unable to recover it. 
00:28:56.862 [2024-07-25 01:29:19.259766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.862 [2024-07-25 01:29:19.259781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.862 qpair failed and we were unable to recover it. 00:28:56.862 [2024-07-25 01:29:19.260234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.862 [2024-07-25 01:29:19.260249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.862 qpair failed and we were unable to recover it. 00:28:56.862 [2024-07-25 01:29:19.260604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.862 [2024-07-25 01:29:19.260619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.862 qpair failed and we were unable to recover it. 00:28:56.862 [2024-07-25 01:29:19.260950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.862 [2024-07-25 01:29:19.260964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.862 qpair failed and we were unable to recover it. 00:28:56.862 [2024-07-25 01:29:19.262138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.862 [2024-07-25 01:29:19.262168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.862 qpair failed and we were unable to recover it. 
00:28:56.862 [2024-07-25 01:29:19.263324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.862 [2024-07-25 01:29:19.263349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.862 qpair failed and we were unable to recover it. 00:28:56.862 [2024-07-25 01:29:19.263719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.862 [2024-07-25 01:29:19.263735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.862 qpair failed and we were unable to recover it. 00:28:56.862 [2024-07-25 01:29:19.264155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.862 [2024-07-25 01:29:19.264171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.862 qpair failed and we were unable to recover it. 00:28:56.862 [2024-07-25 01:29:19.264592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.862 [2024-07-25 01:29:19.264607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.862 qpair failed and we were unable to recover it. 00:28:56.862 [2024-07-25 01:29:19.264954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.862 [2024-07-25 01:29:19.264970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.862 qpair failed and we were unable to recover it. 
00:28:56.862 [2024-07-25 01:29:19.265321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.862 [2024-07-25 01:29:19.265336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.862 qpair failed and we were unable to recover it. 00:28:56.862 [2024-07-25 01:29:19.265749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.862 [2024-07-25 01:29:19.265764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.862 qpair failed and we were unable to recover it. 00:28:56.862 [2024-07-25 01:29:19.266141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.862 [2024-07-25 01:29:19.266156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.862 qpair failed and we were unable to recover it. 00:28:56.862 [2024-07-25 01:29:19.266432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.862 [2024-07-25 01:29:19.266446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.862 qpair failed and we were unable to recover it. 00:28:56.862 [2024-07-25 01:29:19.266781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.862 [2024-07-25 01:29:19.266796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.862 qpair failed and we were unable to recover it. 
00:28:56.862 [2024-07-25 01:29:19.267156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.862 [2024-07-25 01:29:19.267171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.862 qpair failed and we were unable to recover it. 00:28:56.862 [2024-07-25 01:29:19.267529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.862 [2024-07-25 01:29:19.267543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.862 qpair failed and we were unable to recover it. 00:28:56.862 [2024-07-25 01:29:19.267952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.862 [2024-07-25 01:29:19.267967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.862 qpair failed and we were unable to recover it. 00:28:56.862 [2024-07-25 01:29:19.268265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.862 [2024-07-25 01:29:19.268280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.862 qpair failed and we were unable to recover it. 00:28:56.862 [2024-07-25 01:29:19.268697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.862 [2024-07-25 01:29:19.268711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.862 qpair failed and we were unable to recover it. 
00:28:56.862 [2024-07-25 01:29:19.269114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.862 [2024-07-25 01:29:19.269129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.862 qpair failed and we were unable to recover it. 00:28:56.862 [2024-07-25 01:29:19.269552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.862 [2024-07-25 01:29:19.269566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.862 qpair failed and we were unable to recover it. 00:28:56.862 [2024-07-25 01:29:19.270016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.862 [2024-07-25 01:29:19.270030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.862 qpair failed and we were unable to recover it. 00:28:56.862 [2024-07-25 01:29:19.270405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.862 [2024-07-25 01:29:19.270419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.862 qpair failed and we were unable to recover it. 00:28:56.862 [2024-07-25 01:29:19.270884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.862 [2024-07-25 01:29:19.270899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.862 qpair failed and we were unable to recover it. 
00:28:56.862 [2024-07-25 01:29:19.271362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.862 [2024-07-25 01:29:19.271379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.862 qpair failed and we were unable to recover it. 00:28:56.862 [2024-07-25 01:29:19.271726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.862 [2024-07-25 01:29:19.271739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.862 qpair failed and we were unable to recover it. 00:28:56.862 [2024-07-25 01:29:19.272090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.862 [2024-07-25 01:29:19.272105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.862 qpair failed and we were unable to recover it. 00:28:56.862 [2024-07-25 01:29:19.272586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.862 [2024-07-25 01:29:19.272601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.862 qpair failed and we were unable to recover it. 00:28:56.862 [2024-07-25 01:29:19.272754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.862 [2024-07-25 01:29:19.272768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.862 qpair failed and we were unable to recover it. 
00:28:56.862 [2024-07-25 01:29:19.273140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.862 [2024-07-25 01:29:19.273154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.862 qpair failed and we were unable to recover it. 00:28:56.862 [2024-07-25 01:29:19.273601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.862 [2024-07-25 01:29:19.273616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.863 qpair failed and we were unable to recover it. 00:28:56.863 [2024-07-25 01:29:19.273970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.863 [2024-07-25 01:29:19.273984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.863 qpair failed and we were unable to recover it. 00:28:56.863 [2024-07-25 01:29:19.274343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.863 [2024-07-25 01:29:19.274358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.863 qpair failed and we were unable to recover it. 00:28:56.863 [2024-07-25 01:29:19.274572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.863 [2024-07-25 01:29:19.274587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.863 qpair failed and we were unable to recover it. 
00:28:56.863 [2024-07-25 01:29:19.274935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.863 [2024-07-25 01:29:19.274949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.863 qpair failed and we were unable to recover it. 00:28:56.863 [2024-07-25 01:29:19.275288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.863 [2024-07-25 01:29:19.275303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.863 qpair failed and we were unable to recover it. 00:28:56.863 [2024-07-25 01:29:19.275699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.863 [2024-07-25 01:29:19.275713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.863 qpair failed and we were unable to recover it. 00:28:56.863 [2024-07-25 01:29:19.276066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.863 [2024-07-25 01:29:19.276081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.863 qpair failed and we were unable to recover it. 00:28:56.863 [2024-07-25 01:29:19.276276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.863 [2024-07-25 01:29:19.276290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.863 qpair failed and we were unable to recover it. 
00:28:56.863 [2024-07-25 01:29:19.276624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.863 [2024-07-25 01:29:19.276638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.863 qpair failed and we were unable to recover it. 00:28:56.863 [2024-07-25 01:29:19.276992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.863 [2024-07-25 01:29:19.277006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.863 qpair failed and we were unable to recover it. 00:28:56.863 [2024-07-25 01:29:19.277432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.863 [2024-07-25 01:29:19.277446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.863 qpair failed and we were unable to recover it. 00:28:56.863 [2024-07-25 01:29:19.277847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.863 [2024-07-25 01:29:19.277861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.863 qpair failed and we were unable to recover it. 00:28:56.863 [2024-07-25 01:29:19.278265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.863 [2024-07-25 01:29:19.278281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.863 qpair failed and we were unable to recover it. 
00:28:56.863 [2024-07-25 01:29:19.278679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.863 [2024-07-25 01:29:19.278694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.863 qpair failed and we were unable to recover it. 00:28:56.863 [2024-07-25 01:29:19.279102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.863 [2024-07-25 01:29:19.279117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.863 qpair failed and we were unable to recover it. 00:28:56.863 [2024-07-25 01:29:19.279518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.863 [2024-07-25 01:29:19.279532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.863 qpair failed and we were unable to recover it. 00:28:56.863 [2024-07-25 01:29:19.279929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.863 [2024-07-25 01:29:19.279943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.863 qpair failed and we were unable to recover it. 00:28:56.863 [2024-07-25 01:29:19.280423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.863 [2024-07-25 01:29:19.280438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.863 qpair failed and we were unable to recover it. 
00:28:56.863 [2024-07-25 01:29:19.280753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.863 [2024-07-25 01:29:19.280767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.863 qpair failed and we were unable to recover it. 00:28:56.863 [2024-07-25 01:29:19.281174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.863 [2024-07-25 01:29:19.281188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.863 qpair failed and we were unable to recover it. 00:28:56.863 [2024-07-25 01:29:19.281526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.863 [2024-07-25 01:29:19.281541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.863 qpair failed and we were unable to recover it. 00:28:56.863 [2024-07-25 01:29:19.281890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.863 [2024-07-25 01:29:19.281904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.863 qpair failed and we were unable to recover it. 00:28:56.863 [2024-07-25 01:29:19.282255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.863 [2024-07-25 01:29:19.282270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.863 qpair failed and we were unable to recover it. 
00:28:56.863 [2024-07-25 01:29:19.282512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.863 [2024-07-25 01:29:19.282526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.863 qpair failed and we were unable to recover it. 00:28:56.863 [2024-07-25 01:29:19.282870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.863 [2024-07-25 01:29:19.282884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.863 qpair failed and we were unable to recover it. 00:28:56.863 [2024-07-25 01:29:19.283247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.863 [2024-07-25 01:29:19.283262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.863 qpair failed and we were unable to recover it. 00:28:56.863 [2024-07-25 01:29:19.283606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.863 [2024-07-25 01:29:19.283621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.863 qpair failed and we were unable to recover it. 00:28:56.863 [2024-07-25 01:29:19.284032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.863 [2024-07-25 01:29:19.284059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.863 qpair failed and we were unable to recover it. 
00:28:56.863 [2024-07-25 01:29:19.284399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.863 [2024-07-25 01:29:19.284414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.863 qpair failed and we were unable to recover it. 00:28:56.863 [2024-07-25 01:29:19.284776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.863 [2024-07-25 01:29:19.284790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.863 qpair failed and we were unable to recover it. 00:28:56.863 [2024-07-25 01:29:19.285200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.863 [2024-07-25 01:29:19.285217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.863 qpair failed and we were unable to recover it. 00:28:56.863 [2024-07-25 01:29:19.285636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.864 [2024-07-25 01:29:19.285653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.864 qpair failed and we were unable to recover it. 00:28:56.864 [2024-07-25 01:29:19.286058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.864 [2024-07-25 01:29:19.286081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:56.864 qpair failed and we were unable to recover it. 
00:28:56.864 [2024-07-25 01:29:19.286446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.864 [2024-07-25 01:29:19.286466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:56.864 qpair failed and we were unable to recover it.
[the three messages above repeat for every reconnect attempt from 01:29:19.286 through 01:29:19.322]
00:28:56.866 [2024-07-25 01:29:19.323792] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:56.866 [2024-07-25 01:29:19.323820] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:56.866 [2024-07-25 01:29:19.323827] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:56.866 [2024-07-25 01:29:19.323833] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:56.866 [2024-07-25 01:29:19.323839] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:28:56.866 [2024-07-25 01:29:19.324189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:28:56.866 [2024-07-25 01:29:19.324549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:28:56.866 [2024-07-25 01:29:19.324878] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:28:56.866 [2024-07-25 01:29:19.324879] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
[the connect() failed / sock connection error / qpair failed messages continue to repeat between and after the notices above]
00:28:57.138 [the connect() failed, errno = 111 / sock connection error / qpair failed messages keep repeating through 01:29:19.331]
00:28:57.138 [2024-07-25 01:29:19.332041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.138 [2024-07-25 01:29:19.332063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.138 qpair failed and we were unable to recover it.
00:28:57.138 [2024-07-25 01:29:19.332514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.138 [2024-07-25 01:29:19.332529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.138 qpair failed and we were unable to recover it.
00:28:57.138 [2024-07-25 01:29:19.332954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.138 [2024-07-25 01:29:19.332969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.138 qpair failed and we were unable to recover it.
00:28:57.138 [2024-07-25 01:29:19.333266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.138 [2024-07-25 01:29:19.333280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.138 qpair failed and we were unable to recover it.
00:28:57.138 [2024-07-25 01:29:19.333617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.138 [2024-07-25 01:29:19.333636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.138 qpair failed and we were unable to recover it.
00:28:57.138 [2024-07-25 01:29:19.334136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.138 [2024-07-25 01:29:19.334152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.138 qpair failed and we were unable to recover it.
00:28:57.138 [2024-07-25 01:29:19.334586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.138 [2024-07-25 01:29:19.334600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.138 qpair failed and we were unable to recover it.
00:28:57.138 [2024-07-25 01:29:19.335140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.138 [2024-07-25 01:29:19.335155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.138 qpair failed and we were unable to recover it.
00:28:57.138 [2024-07-25 01:29:19.335392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.138 [2024-07-25 01:29:19.335406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.138 qpair failed and we were unable to recover it.
00:28:57.138 [2024-07-25 01:29:19.335769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.138 [2024-07-25 01:29:19.335784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.138 qpair failed and we were unable to recover it.
00:28:57.138 [2024-07-25 01:29:19.336291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.138 [2024-07-25 01:29:19.336307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.138 qpair failed and we were unable to recover it.
00:28:57.138 [2024-07-25 01:29:19.336673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.138 [2024-07-25 01:29:19.336689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.138 qpair failed and we were unable to recover it.
00:28:57.138 [2024-07-25 01:29:19.337114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.138 [2024-07-25 01:29:19.337131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.138 qpair failed and we were unable to recover it.
00:28:57.138 [2024-07-25 01:29:19.337566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.138 [2024-07-25 01:29:19.337582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.138 qpair failed and we were unable to recover it.
00:28:57.138 [2024-07-25 01:29:19.337985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.138 [2024-07-25 01:29:19.337999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.138 qpair failed and we were unable to recover it.
00:28:57.138 [2024-07-25 01:29:19.338474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.138 [2024-07-25 01:29:19.338490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.138 qpair failed and we were unable to recover it.
00:28:57.138 [2024-07-25 01:29:19.338850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.138 [2024-07-25 01:29:19.338866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.138 qpair failed and we were unable to recover it.
00:28:57.138 [2024-07-25 01:29:19.339339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.138 [2024-07-25 01:29:19.339356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.138 qpair failed and we were unable to recover it.
00:28:57.138 [2024-07-25 01:29:19.339765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.138 [2024-07-25 01:29:19.339782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.138 qpair failed and we were unable to recover it.
00:28:57.138 [2024-07-25 01:29:19.340261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.138 [2024-07-25 01:29:19.340277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.138 qpair failed and we were unable to recover it.
00:28:57.138 [2024-07-25 01:29:19.340639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.138 [2024-07-25 01:29:19.340662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.138 qpair failed and we were unable to recover it.
00:28:57.138 [2024-07-25 01:29:19.341086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.138 [2024-07-25 01:29:19.341102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.138 qpair failed and we were unable to recover it.
00:28:57.138 [2024-07-25 01:29:19.341591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.138 [2024-07-25 01:29:19.341608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.138 qpair failed and we were unable to recover it.
00:28:57.138 [2024-07-25 01:29:19.342031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.138 [2024-07-25 01:29:19.342057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.138 qpair failed and we were unable to recover it.
00:28:57.138 [2024-07-25 01:29:19.342529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.138 [2024-07-25 01:29:19.342544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.139 qpair failed and we were unable to recover it.
00:28:57.139 [2024-07-25 01:29:19.343002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.139 [2024-07-25 01:29:19.343017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.139 qpair failed and we were unable to recover it.
00:28:57.139 [2024-07-25 01:29:19.343493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.139 [2024-07-25 01:29:19.343509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.139 qpair failed and we were unable to recover it.
00:28:57.139 [2024-07-25 01:29:19.343958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.139 [2024-07-25 01:29:19.343974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.139 qpair failed and we were unable to recover it.
00:28:57.139 [2024-07-25 01:29:19.344402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.139 [2024-07-25 01:29:19.344419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.139 qpair failed and we were unable to recover it.
00:28:57.139 [2024-07-25 01:29:19.344794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.139 [2024-07-25 01:29:19.344809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.139 qpair failed and we were unable to recover it.
00:28:57.139 [2024-07-25 01:29:19.345238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.139 [2024-07-25 01:29:19.345255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.139 qpair failed and we were unable to recover it.
00:28:57.139 [2024-07-25 01:29:19.345636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.139 [2024-07-25 01:29:19.345651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.139 qpair failed and we were unable to recover it.
00:28:57.139 [2024-07-25 01:29:19.346137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.139 [2024-07-25 01:29:19.346154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.139 qpair failed and we were unable to recover it.
00:28:57.139 [2024-07-25 01:29:19.346526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.139 [2024-07-25 01:29:19.346540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.139 qpair failed and we were unable to recover it.
00:28:57.139 [2024-07-25 01:29:19.347168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.139 [2024-07-25 01:29:19.347187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.139 qpair failed and we were unable to recover it.
00:28:57.139 [2024-07-25 01:29:19.347811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.139 [2024-07-25 01:29:19.347827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.139 qpair failed and we were unable to recover it.
00:28:57.139 [2024-07-25 01:29:19.348449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.139 [2024-07-25 01:29:19.348464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.139 qpair failed and we were unable to recover it.
00:28:57.139 [2024-07-25 01:29:19.348882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.139 [2024-07-25 01:29:19.348898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.139 qpair failed and we were unable to recover it.
00:28:57.139 [2024-07-25 01:29:19.349391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.139 [2024-07-25 01:29:19.349408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.139 qpair failed and we were unable to recover it.
00:28:57.139 [2024-07-25 01:29:19.349874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.139 [2024-07-25 01:29:19.349888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.139 qpair failed and we were unable to recover it.
00:28:57.139 [2024-07-25 01:29:19.350321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.139 [2024-07-25 01:29:19.350337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.139 qpair failed and we were unable to recover it.
00:28:57.139 [2024-07-25 01:29:19.350693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.139 [2024-07-25 01:29:19.350708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.139 qpair failed and we were unable to recover it.
00:28:57.139 [2024-07-25 01:29:19.351201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.139 [2024-07-25 01:29:19.351216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.139 qpair failed and we were unable to recover it.
00:28:57.139 [2024-07-25 01:29:19.351569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.139 [2024-07-25 01:29:19.351584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.139 qpair failed and we were unable to recover it.
00:28:57.139 [2024-07-25 01:29:19.352023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.139 [2024-07-25 01:29:19.352049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.139 qpair failed and we were unable to recover it.
00:28:57.139 [2024-07-25 01:29:19.352498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.139 [2024-07-25 01:29:19.352515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.139 qpair failed and we were unable to recover it.
00:28:57.139 [2024-07-25 01:29:19.352972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.139 [2024-07-25 01:29:19.352988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.139 qpair failed and we were unable to recover it.
00:28:57.139 [2024-07-25 01:29:19.353398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.139 [2024-07-25 01:29:19.353414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.139 qpair failed and we were unable to recover it.
00:28:57.139 [2024-07-25 01:29:19.353834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.139 [2024-07-25 01:29:19.353849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.139 qpair failed and we were unable to recover it.
00:28:57.139 [2024-07-25 01:29:19.354260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.139 [2024-07-25 01:29:19.354275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.139 qpair failed and we were unable to recover it.
00:28:57.139 [2024-07-25 01:29:19.354632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.139 [2024-07-25 01:29:19.354647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.139 qpair failed and we were unable to recover it.
00:28:57.139 [2024-07-25 01:29:19.355070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.139 [2024-07-25 01:29:19.355086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.139 qpair failed and we were unable to recover it.
00:28:57.139 [2024-07-25 01:29:19.355505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.139 [2024-07-25 01:29:19.355520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.139 qpair failed and we were unable to recover it.
00:28:57.139 [2024-07-25 01:29:19.355974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.139 [2024-07-25 01:29:19.355989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.139 qpair failed and we were unable to recover it.
00:28:57.139 [2024-07-25 01:29:19.356428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.139 [2024-07-25 01:29:19.356444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.139 qpair failed and we were unable to recover it.
00:28:57.139 [2024-07-25 01:29:19.356803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.139 [2024-07-25 01:29:19.356818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.139 qpair failed and we were unable to recover it.
00:28:57.139 [2024-07-25 01:29:19.357288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.139 [2024-07-25 01:29:19.357304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.139 qpair failed and we were unable to recover it.
00:28:57.139 [2024-07-25 01:29:19.357715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.139 [2024-07-25 01:29:19.357730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.139 qpair failed and we were unable to recover it.
00:28:57.139 [2024-07-25 01:29:19.358231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.139 [2024-07-25 01:29:19.358247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.139 qpair failed and we were unable to recover it.
00:28:57.139 [2024-07-25 01:29:19.358685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.139 [2024-07-25 01:29:19.358699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.139 qpair failed and we were unable to recover it.
00:28:57.139 [2024-07-25 01:29:19.359153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.139 [2024-07-25 01:29:19.359169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.139 qpair failed and we were unable to recover it.
00:28:57.139 [2024-07-25 01:29:19.359605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.139 [2024-07-25 01:29:19.359620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.139 qpair failed and we were unable to recover it.
00:28:57.140 [2024-07-25 01:29:19.360075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.140 [2024-07-25 01:29:19.360091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.140 qpair failed and we were unable to recover it.
00:28:57.140 [2024-07-25 01:29:19.360510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.140 [2024-07-25 01:29:19.360525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.140 qpair failed and we were unable to recover it.
00:28:57.140 [2024-07-25 01:29:19.360929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.140 [2024-07-25 01:29:19.360943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.140 qpair failed and we were unable to recover it.
00:28:57.140 [2024-07-25 01:29:19.361310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.140 [2024-07-25 01:29:19.361325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.140 qpair failed and we were unable to recover it.
00:28:57.140 [2024-07-25 01:29:19.361695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.140 [2024-07-25 01:29:19.361710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.140 qpair failed and we were unable to recover it.
00:28:57.140 [2024-07-25 01:29:19.362173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.140 [2024-07-25 01:29:19.362188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.140 qpair failed and we were unable to recover it.
00:28:57.140 [2024-07-25 01:29:19.362558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.140 [2024-07-25 01:29:19.362572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.140 qpair failed and we were unable to recover it.
00:28:57.140 [2024-07-25 01:29:19.363046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.140 [2024-07-25 01:29:19.363060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.140 qpair failed and we were unable to recover it.
00:28:57.140 [2024-07-25 01:29:19.363480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.140 [2024-07-25 01:29:19.363494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.140 qpair failed and we were unable to recover it.
00:28:57.140 [2024-07-25 01:29:19.363911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.140 [2024-07-25 01:29:19.363925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.140 qpair failed and we were unable to recover it.
00:28:57.140 [2024-07-25 01:29:19.364338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.140 [2024-07-25 01:29:19.364352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.140 qpair failed and we were unable to recover it.
00:28:57.140 [2024-07-25 01:29:19.364768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.140 [2024-07-25 01:29:19.364781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.140 qpair failed and we were unable to recover it.
00:28:57.140 [2024-07-25 01:29:19.365241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.140 [2024-07-25 01:29:19.365257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.140 qpair failed and we were unable to recover it.
00:28:57.140 [2024-07-25 01:29:19.365673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.140 [2024-07-25 01:29:19.365688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.140 qpair failed and we were unable to recover it.
00:28:57.140 [2024-07-25 01:29:19.366154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.140 [2024-07-25 01:29:19.366169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.140 qpair failed and we were unable to recover it.
00:28:57.140 [2024-07-25 01:29:19.366588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.140 [2024-07-25 01:29:19.366603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.140 qpair failed and we were unable to recover it.
00:28:57.140 [2024-07-25 01:29:19.366971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.140 [2024-07-25 01:29:19.366986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.140 qpair failed and we were unable to recover it.
00:28:57.140 [2024-07-25 01:29:19.367455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.140 [2024-07-25 01:29:19.367472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.140 qpair failed and we were unable to recover it.
00:28:57.140 [2024-07-25 01:29:19.367992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.140 [2024-07-25 01:29:19.368007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.140 qpair failed and we were unable to recover it.
00:28:57.140 [2024-07-25 01:29:19.368446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.140 [2024-07-25 01:29:19.368462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.140 qpair failed and we were unable to recover it.
00:28:57.140 [2024-07-25 01:29:19.368924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.140 [2024-07-25 01:29:19.368940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.140 qpair failed and we were unable to recover it.
00:28:57.140 [2024-07-25 01:29:19.369431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.140 [2024-07-25 01:29:19.369447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.140 qpair failed and we were unable to recover it.
00:28:57.140 [2024-07-25 01:29:19.369867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.140 [2024-07-25 01:29:19.369885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.140 qpair failed and we were unable to recover it.
00:28:57.140 [2024-07-25 01:29:19.370353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.140 [2024-07-25 01:29:19.370369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.140 qpair failed and we were unable to recover it.
00:28:57.140 [2024-07-25 01:29:19.370783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.140 [2024-07-25 01:29:19.370799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.140 qpair failed and we were unable to recover it.
00:28:57.140 [2024-07-25 01:29:19.371275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.140 [2024-07-25 01:29:19.371291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.140 qpair failed and we were unable to recover it.
00:28:57.140 [2024-07-25 01:29:19.371705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.140 [2024-07-25 01:29:19.371720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.140 qpair failed and we were unable to recover it.
00:28:57.140 [2024-07-25 01:29:19.372132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.140 [2024-07-25 01:29:19.372148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.140 qpair failed and we were unable to recover it.
00:28:57.140 [2024-07-25 01:29:19.372561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.140 [2024-07-25 01:29:19.372576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.140 qpair failed and we were unable to recover it.
00:28:57.140 [2024-07-25 01:29:19.373077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.140 [2024-07-25 01:29:19.373093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.140 qpair failed and we were unable to recover it.
00:28:57.140 [2024-07-25 01:29:19.373532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.140 [2024-07-25 01:29:19.373546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.140 qpair failed and we were unable to recover it.
00:28:57.140 [2024-07-25 01:29:19.373965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.140 [2024-07-25 01:29:19.373980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.140 qpair failed and we were unable to recover it.
00:28:57.140 [2024-07-25 01:29:19.374468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.140 [2024-07-25 01:29:19.374485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.140 qpair failed and we were unable to recover it.
00:28:57.140 [2024-07-25 01:29:19.374985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.140 [2024-07-25 01:29:19.375001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.140 qpair failed and we were unable to recover it. 00:28:57.140 [2024-07-25 01:29:19.375459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.140 [2024-07-25 01:29:19.375474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.140 qpair failed and we were unable to recover it. 00:28:57.140 [2024-07-25 01:29:19.375822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.140 [2024-07-25 01:29:19.375837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.140 qpair failed and we were unable to recover it. 00:28:57.140 [2024-07-25 01:29:19.376270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.140 [2024-07-25 01:29:19.376286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.140 qpair failed and we were unable to recover it. 00:28:57.140 [2024-07-25 01:29:19.376639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.141 [2024-07-25 01:29:19.376655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.141 qpair failed and we were unable to recover it. 
00:28:57.141 [2024-07-25 01:29:19.377158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.141 [2024-07-25 01:29:19.377174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.141 qpair failed and we were unable to recover it. 00:28:57.141 [2024-07-25 01:29:19.377824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.141 [2024-07-25 01:29:19.377839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.141 qpair failed and we were unable to recover it. 00:28:57.141 [2024-07-25 01:29:19.378487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.141 [2024-07-25 01:29:19.378503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.141 qpair failed and we were unable to recover it. 00:28:57.141 [2024-07-25 01:29:19.378976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.141 [2024-07-25 01:29:19.378991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.141 qpair failed and we were unable to recover it. 00:28:57.141 [2024-07-25 01:29:19.379401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.141 [2024-07-25 01:29:19.379416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.141 qpair failed and we were unable to recover it. 
00:28:57.141 [2024-07-25 01:29:19.379889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.141 [2024-07-25 01:29:19.379903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.141 qpair failed and we were unable to recover it. 00:28:57.141 [2024-07-25 01:29:19.380441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.141 [2024-07-25 01:29:19.380458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.141 qpair failed and we were unable to recover it. 00:28:57.141 [2024-07-25 01:29:19.381188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.141 [2024-07-25 01:29:19.381205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.141 qpair failed and we were unable to recover it. 00:28:57.141 [2024-07-25 01:29:19.381575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.141 [2024-07-25 01:29:19.381589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.141 qpair failed and we were unable to recover it. 00:28:57.141 [2024-07-25 01:29:19.381950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.141 [2024-07-25 01:29:19.381964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.141 qpair failed and we were unable to recover it. 
00:28:57.141 [2024-07-25 01:29:19.382431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.141 [2024-07-25 01:29:19.382446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.141 qpair failed and we were unable to recover it. 00:28:57.141 [2024-07-25 01:29:19.382818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.141 [2024-07-25 01:29:19.382833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.141 qpair failed and we were unable to recover it. 00:28:57.141 [2024-07-25 01:29:19.383245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.141 [2024-07-25 01:29:19.383259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.141 qpair failed and we were unable to recover it. 00:28:57.141 [2024-07-25 01:29:19.383636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.141 [2024-07-25 01:29:19.383649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.141 qpair failed and we were unable to recover it. 00:28:57.141 [2024-07-25 01:29:19.383943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.141 [2024-07-25 01:29:19.383956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.141 qpair failed and we were unable to recover it. 
00:28:57.141 [2024-07-25 01:29:19.384371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.141 [2024-07-25 01:29:19.384386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.141 qpair failed and we were unable to recover it. 00:28:57.141 [2024-07-25 01:29:19.384750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.141 [2024-07-25 01:29:19.384764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.141 qpair failed and we were unable to recover it. 00:28:57.141 [2024-07-25 01:29:19.385185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.141 [2024-07-25 01:29:19.385199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.141 qpair failed and we were unable to recover it. 00:28:57.141 [2024-07-25 01:29:19.385552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.141 [2024-07-25 01:29:19.385565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.141 qpair failed and we were unable to recover it. 00:28:57.141 [2024-07-25 01:29:19.386137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.141 [2024-07-25 01:29:19.386153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.141 qpair failed and we were unable to recover it. 
00:28:57.141 [2024-07-25 01:29:19.386494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.141 [2024-07-25 01:29:19.386508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.141 qpair failed and we were unable to recover it. 00:28:57.141 [2024-07-25 01:29:19.387027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.141 [2024-07-25 01:29:19.387041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.141 qpair failed and we were unable to recover it. 00:28:57.141 [2024-07-25 01:29:19.387525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.141 [2024-07-25 01:29:19.387540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.141 qpair failed and we were unable to recover it. 00:28:57.141 [2024-07-25 01:29:19.387903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.141 [2024-07-25 01:29:19.387917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.141 qpair failed and we were unable to recover it. 00:28:57.141 [2024-07-25 01:29:19.388341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.141 [2024-07-25 01:29:19.388358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.141 qpair failed and we were unable to recover it. 
00:28:57.141 [2024-07-25 01:29:19.388715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.141 [2024-07-25 01:29:19.388730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.141 qpair failed and we were unable to recover it. 00:28:57.141 [2024-07-25 01:29:19.389170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.141 [2024-07-25 01:29:19.389185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.141 qpair failed and we were unable to recover it. 00:28:57.141 [2024-07-25 01:29:19.389555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.141 [2024-07-25 01:29:19.389569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.141 qpair failed and we were unable to recover it. 00:28:57.141 [2024-07-25 01:29:19.389940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.141 [2024-07-25 01:29:19.389953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.141 qpair failed and we were unable to recover it. 00:28:57.141 [2024-07-25 01:29:19.390403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.141 [2024-07-25 01:29:19.390418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.141 qpair failed and we were unable to recover it. 
00:28:57.141 [2024-07-25 01:29:19.390858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.141 [2024-07-25 01:29:19.390872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.141 qpair failed and we were unable to recover it. 00:28:57.141 [2024-07-25 01:29:19.391311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.141 [2024-07-25 01:29:19.391326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.141 qpair failed and we were unable to recover it. 00:28:57.141 [2024-07-25 01:29:19.391686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.141 [2024-07-25 01:29:19.391700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.141 qpair failed and we were unable to recover it. 00:28:57.141 [2024-07-25 01:29:19.392111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.141 [2024-07-25 01:29:19.392125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.141 qpair failed and we were unable to recover it. 00:28:57.141 [2024-07-25 01:29:19.392635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.141 [2024-07-25 01:29:19.392649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.141 qpair failed and we were unable to recover it. 
00:28:57.141 [2024-07-25 01:29:19.393365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.141 [2024-07-25 01:29:19.393381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.141 qpair failed and we were unable to recover it. 00:28:57.141 [2024-07-25 01:29:19.393893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.141 [2024-07-25 01:29:19.393907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.142 qpair failed and we were unable to recover it. 00:28:57.142 [2024-07-25 01:29:19.394400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.142 [2024-07-25 01:29:19.394415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.142 qpair failed and we were unable to recover it. 00:28:57.142 [2024-07-25 01:29:19.394782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.142 [2024-07-25 01:29:19.394796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.142 qpair failed and we were unable to recover it. 00:28:57.142 [2024-07-25 01:29:19.395218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.142 [2024-07-25 01:29:19.395233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.142 qpair failed and we were unable to recover it. 
00:28:57.142 [2024-07-25 01:29:19.395594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.142 [2024-07-25 01:29:19.395608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.142 qpair failed and we were unable to recover it. 00:28:57.142 [2024-07-25 01:29:19.396040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.142 [2024-07-25 01:29:19.396060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.142 qpair failed and we were unable to recover it. 00:28:57.142 [2024-07-25 01:29:19.396390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.142 [2024-07-25 01:29:19.396404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.142 qpair failed and we were unable to recover it. 00:28:57.142 [2024-07-25 01:29:19.397057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.142 [2024-07-25 01:29:19.397072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.142 qpair failed and we were unable to recover it. 00:28:57.142 [2024-07-25 01:29:19.397390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.142 [2024-07-25 01:29:19.397404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.142 qpair failed and we were unable to recover it. 
00:28:57.142 [2024-07-25 01:29:19.397763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.142 [2024-07-25 01:29:19.397777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.142 qpair failed and we were unable to recover it. 00:28:57.142 [2024-07-25 01:29:19.398202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.142 [2024-07-25 01:29:19.398216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.142 qpair failed and we were unable to recover it. 00:28:57.142 [2024-07-25 01:29:19.398576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.142 [2024-07-25 01:29:19.398590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.142 qpair failed and we were unable to recover it. 00:28:57.142 [2024-07-25 01:29:19.399039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.142 [2024-07-25 01:29:19.399058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.142 qpair failed and we were unable to recover it. 00:28:57.142 [2024-07-25 01:29:19.399474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.142 [2024-07-25 01:29:19.399487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.142 qpair failed and we were unable to recover it. 
00:28:57.142 [2024-07-25 01:29:19.399899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.142 [2024-07-25 01:29:19.399913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.142 qpair failed and we were unable to recover it. 00:28:57.142 [2024-07-25 01:29:19.400409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.142 [2024-07-25 01:29:19.400423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.142 qpair failed and we were unable to recover it. 00:28:57.142 [2024-07-25 01:29:19.400895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.142 [2024-07-25 01:29:19.400909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.142 qpair failed and we were unable to recover it. 00:28:57.142 [2024-07-25 01:29:19.401330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.142 [2024-07-25 01:29:19.401344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.142 qpair failed and we were unable to recover it. 00:28:57.142 [2024-07-25 01:29:19.401758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.142 [2024-07-25 01:29:19.401772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.142 qpair failed and we were unable to recover it. 
00:28:57.142 [2024-07-25 01:29:19.402236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.142 [2024-07-25 01:29:19.402250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.142 qpair failed and we were unable to recover it. 00:28:57.142 [2024-07-25 01:29:19.402664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.142 [2024-07-25 01:29:19.402677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.142 qpair failed and we were unable to recover it. 00:28:57.142 [2024-07-25 01:29:19.403117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.142 [2024-07-25 01:29:19.403131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.142 qpair failed and we were unable to recover it. 00:28:57.142 [2024-07-25 01:29:19.403594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.142 [2024-07-25 01:29:19.403608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.142 qpair failed and we were unable to recover it. 00:28:57.142 [2024-07-25 01:29:19.403965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.142 [2024-07-25 01:29:19.403978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.142 qpair failed and we were unable to recover it. 
00:28:57.142 [2024-07-25 01:29:19.404387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.142 [2024-07-25 01:29:19.404401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.142 qpair failed and we were unable to recover it. 00:28:57.142 [2024-07-25 01:29:19.404815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.142 [2024-07-25 01:29:19.404828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.142 qpair failed and we were unable to recover it. 00:28:57.142 [2024-07-25 01:29:19.405266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.142 [2024-07-25 01:29:19.405280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.142 qpair failed and we were unable to recover it. 00:28:57.142 [2024-07-25 01:29:19.405638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.142 [2024-07-25 01:29:19.405651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.142 qpair failed and we were unable to recover it. 00:28:57.142 [2024-07-25 01:29:19.406114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.142 [2024-07-25 01:29:19.406131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.142 qpair failed and we were unable to recover it. 
00:28:57.142 [2024-07-25 01:29:19.406546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.142 [2024-07-25 01:29:19.406560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.142 qpair failed and we were unable to recover it. 00:28:57.142 [2024-07-25 01:29:19.406997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.142 [2024-07-25 01:29:19.407011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.142 qpair failed and we were unable to recover it. 00:28:57.142 [2024-07-25 01:29:19.407432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.142 [2024-07-25 01:29:19.407446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.142 qpair failed and we were unable to recover it. 00:28:57.143 [2024-07-25 01:29:19.407881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.143 [2024-07-25 01:29:19.407895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.143 qpair failed and we were unable to recover it. 00:28:57.143 [2024-07-25 01:29:19.408322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.143 [2024-07-25 01:29:19.408336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.143 qpair failed and we were unable to recover it. 
00:28:57.143 [2024-07-25 01:29:19.408921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.143 [2024-07-25 01:29:19.408935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.143 qpair failed and we were unable to recover it. 00:28:57.143 [2024-07-25 01:29:19.409365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.143 [2024-07-25 01:29:19.409380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.143 qpair failed and we were unable to recover it. 00:28:57.143 [2024-07-25 01:29:19.409756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.143 [2024-07-25 01:29:19.409770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.143 qpair failed and we were unable to recover it. 00:28:57.143 [2024-07-25 01:29:19.410273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.143 [2024-07-25 01:29:19.410288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.143 qpair failed and we were unable to recover it. 00:28:57.143 [2024-07-25 01:29:19.410718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.143 [2024-07-25 01:29:19.410732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.143 qpair failed and we were unable to recover it. 
00:28:57.146 [2024-07-25 01:29:19.460150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.146 [2024-07-25 01:29:19.460165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.146 qpair failed and we were unable to recover it. 00:28:57.146 [2024-07-25 01:29:19.460627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.146 [2024-07-25 01:29:19.460641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.146 qpair failed and we were unable to recover it. 00:28:57.146 [2024-07-25 01:29:19.461105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.146 [2024-07-25 01:29:19.461119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.146 qpair failed and we were unable to recover it. 00:28:57.146 [2024-07-25 01:29:19.461607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.146 [2024-07-25 01:29:19.461620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.146 qpair failed and we were unable to recover it. 00:28:57.146 [2024-07-25 01:29:19.462082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.146 [2024-07-25 01:29:19.462097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.146 qpair failed and we were unable to recover it. 
00:28:57.146 [2024-07-25 01:29:19.462439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.146 [2024-07-25 01:29:19.462453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.146 qpair failed and we were unable to recover it. 00:28:57.146 [2024-07-25 01:29:19.462848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.146 [2024-07-25 01:29:19.462861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.146 qpair failed and we were unable to recover it. 00:28:57.146 [2024-07-25 01:29:19.463293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.146 [2024-07-25 01:29:19.463307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.146 qpair failed and we were unable to recover it. 00:28:57.146 [2024-07-25 01:29:19.463768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.146 [2024-07-25 01:29:19.463782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.146 qpair failed and we were unable to recover it. 00:28:57.146 [2024-07-25 01:29:19.464215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.146 [2024-07-25 01:29:19.464229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.146 qpair failed and we were unable to recover it. 
00:28:57.146 [2024-07-25 01:29:19.464702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.146 [2024-07-25 01:29:19.464715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.146 qpair failed and we were unable to recover it. 00:28:57.146 [2024-07-25 01:29:19.465129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.146 [2024-07-25 01:29:19.465144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.146 qpair failed and we were unable to recover it. 00:28:57.146 [2024-07-25 01:29:19.465557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.146 [2024-07-25 01:29:19.465571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.146 qpair failed and we were unable to recover it. 00:28:57.146 [2024-07-25 01:29:19.465987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.146 [2024-07-25 01:29:19.466001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.146 qpair failed and we were unable to recover it. 00:28:57.146 [2024-07-25 01:29:19.466471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.146 [2024-07-25 01:29:19.466485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.146 qpair failed and we were unable to recover it. 
00:28:57.146 [2024-07-25 01:29:19.466945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.146 [2024-07-25 01:29:19.466959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.146 qpair failed and we were unable to recover it. 00:28:57.146 [2024-07-25 01:29:19.467454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.146 [2024-07-25 01:29:19.467469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.146 qpair failed and we were unable to recover it. 00:28:57.146 [2024-07-25 01:29:19.467977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.146 [2024-07-25 01:29:19.467991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.146 qpair failed and we were unable to recover it. 00:28:57.146 [2024-07-25 01:29:19.468340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.146 [2024-07-25 01:29:19.468354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.146 qpair failed and we were unable to recover it. 00:28:57.146 [2024-07-25 01:29:19.468748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.146 [2024-07-25 01:29:19.468761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.146 qpair failed and we were unable to recover it. 
00:28:57.146 [2024-07-25 01:29:19.469243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.146 [2024-07-25 01:29:19.469257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.146 qpair failed and we were unable to recover it. 00:28:57.146 [2024-07-25 01:29:19.469651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.146 [2024-07-25 01:29:19.469665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.146 qpair failed and we were unable to recover it. 00:28:57.146 [2024-07-25 01:29:19.470075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.146 [2024-07-25 01:29:19.470089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.146 qpair failed and we were unable to recover it. 00:28:57.146 [2024-07-25 01:29:19.470299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.146 [2024-07-25 01:29:19.470313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.146 qpair failed and we were unable to recover it. 00:28:57.146 [2024-07-25 01:29:19.470720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.146 [2024-07-25 01:29:19.470733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.146 qpair failed and we were unable to recover it. 
00:28:57.146 [2024-07-25 01:29:19.471154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.146 [2024-07-25 01:29:19.471168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.146 qpair failed and we were unable to recover it. 00:28:57.146 [2024-07-25 01:29:19.471653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.146 [2024-07-25 01:29:19.471667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.146 qpair failed and we were unable to recover it. 00:28:57.146 [2024-07-25 01:29:19.472152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.146 [2024-07-25 01:29:19.472166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.146 qpair failed and we were unable to recover it. 00:28:57.146 [2024-07-25 01:29:19.472598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.146 [2024-07-25 01:29:19.472611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.146 qpair failed and we were unable to recover it. 00:28:57.146 [2024-07-25 01:29:19.473095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.146 [2024-07-25 01:29:19.473109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.146 qpair failed and we were unable to recover it. 
00:28:57.146 [2024-07-25 01:29:19.473445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.146 [2024-07-25 01:29:19.473458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.146 qpair failed and we were unable to recover it. 00:28:57.146 [2024-07-25 01:29:19.473853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.146 [2024-07-25 01:29:19.473867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.147 qpair failed and we were unable to recover it. 00:28:57.147 [2024-07-25 01:29:19.474263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.147 [2024-07-25 01:29:19.474277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.147 qpair failed and we were unable to recover it. 00:28:57.147 [2024-07-25 01:29:19.474762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.147 [2024-07-25 01:29:19.474776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.147 qpair failed and we were unable to recover it. 00:28:57.147 [2024-07-25 01:29:19.475236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.147 [2024-07-25 01:29:19.475250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.147 qpair failed and we were unable to recover it. 
00:28:57.147 [2024-07-25 01:29:19.475685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.147 [2024-07-25 01:29:19.475698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.147 qpair failed and we were unable to recover it. 00:28:57.147 [2024-07-25 01:29:19.476157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.147 [2024-07-25 01:29:19.476171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.147 qpair failed and we were unable to recover it. 00:28:57.147 [2024-07-25 01:29:19.476660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.147 [2024-07-25 01:29:19.476673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.147 qpair failed and we were unable to recover it. 00:28:57.147 [2024-07-25 01:29:19.477133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.147 [2024-07-25 01:29:19.477147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.147 qpair failed and we were unable to recover it. 00:28:57.147 [2024-07-25 01:29:19.477549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.147 [2024-07-25 01:29:19.477565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.147 qpair failed and we were unable to recover it. 
00:28:57.147 [2024-07-25 01:29:19.478049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.147 [2024-07-25 01:29:19.478063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.147 qpair failed and we were unable to recover it. 00:28:57.147 [2024-07-25 01:29:19.478524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.147 [2024-07-25 01:29:19.478538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.147 qpair failed and we were unable to recover it. 00:28:57.147 [2024-07-25 01:29:19.478936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.147 [2024-07-25 01:29:19.478950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.147 qpair failed and we were unable to recover it. 00:28:57.147 [2024-07-25 01:29:19.479359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.147 [2024-07-25 01:29:19.479373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.147 qpair failed and we were unable to recover it. 00:28:57.147 [2024-07-25 01:29:19.479832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.147 [2024-07-25 01:29:19.479846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.147 qpair failed and we were unable to recover it. 
00:28:57.147 [2024-07-25 01:29:19.480252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.147 [2024-07-25 01:29:19.480266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.147 qpair failed and we were unable to recover it. 00:28:57.147 [2024-07-25 01:29:19.480756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.147 [2024-07-25 01:29:19.480770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.147 qpair failed and we were unable to recover it. 00:28:57.147 [2024-07-25 01:29:19.481230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.147 [2024-07-25 01:29:19.481244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.147 qpair failed and we were unable to recover it. 00:28:57.147 [2024-07-25 01:29:19.481603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.147 [2024-07-25 01:29:19.481617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.147 qpair failed and we were unable to recover it. 00:28:57.147 [2024-07-25 01:29:19.482046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.147 [2024-07-25 01:29:19.482060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.147 qpair failed and we were unable to recover it. 
00:28:57.147 [2024-07-25 01:29:19.482458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.147 [2024-07-25 01:29:19.482472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.147 qpair failed and we were unable to recover it. 00:28:57.147 [2024-07-25 01:29:19.482816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.147 [2024-07-25 01:29:19.482830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.147 qpair failed and we were unable to recover it. 00:28:57.147 [2024-07-25 01:29:19.483343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.147 [2024-07-25 01:29:19.483358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.147 qpair failed and we were unable to recover it. 00:28:57.147 [2024-07-25 01:29:19.483703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.147 [2024-07-25 01:29:19.483717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.147 qpair failed and we were unable to recover it. 00:28:57.147 [2024-07-25 01:29:19.484197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.147 [2024-07-25 01:29:19.484211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.147 qpair failed and we were unable to recover it. 
00:28:57.147 [2024-07-25 01:29:19.484615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.147 [2024-07-25 01:29:19.484629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.147 qpair failed and we were unable to recover it. 00:28:57.147 [2024-07-25 01:29:19.484971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.147 [2024-07-25 01:29:19.484984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.147 qpair failed and we were unable to recover it. 00:28:57.147 [2024-07-25 01:29:19.485470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.147 [2024-07-25 01:29:19.485484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.147 qpair failed and we were unable to recover it. 00:28:57.147 [2024-07-25 01:29:19.485899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.147 [2024-07-25 01:29:19.485913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.147 qpair failed and we were unable to recover it. 00:28:57.147 [2024-07-25 01:29:19.486397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.147 [2024-07-25 01:29:19.486411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.147 qpair failed and we were unable to recover it. 
00:28:57.147 [2024-07-25 01:29:19.486838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.147 [2024-07-25 01:29:19.486852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.147 qpair failed and we were unable to recover it. 00:28:57.147 [2024-07-25 01:29:19.487311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.147 [2024-07-25 01:29:19.487325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.147 qpair failed and we were unable to recover it. 00:28:57.147 [2024-07-25 01:29:19.487753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.147 [2024-07-25 01:29:19.487767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.147 qpair failed and we were unable to recover it. 00:28:57.147 [2024-07-25 01:29:19.488177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.148 [2024-07-25 01:29:19.488191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.148 qpair failed and we were unable to recover it. 00:28:57.148 [2024-07-25 01:29:19.488599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.148 [2024-07-25 01:29:19.488613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.148 qpair failed and we were unable to recover it. 
00:28:57.148 [2024-07-25 01:29:19.489099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.148 [2024-07-25 01:29:19.489113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.148 qpair failed and we were unable to recover it. 00:28:57.148 [2024-07-25 01:29:19.489578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.148 [2024-07-25 01:29:19.489592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.148 qpair failed and we were unable to recover it. 00:28:57.148 [2024-07-25 01:29:19.490100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.148 [2024-07-25 01:29:19.490114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.148 qpair failed and we were unable to recover it. 00:28:57.148 [2024-07-25 01:29:19.490526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.148 [2024-07-25 01:29:19.490540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.148 qpair failed and we were unable to recover it. 00:28:57.148 [2024-07-25 01:29:19.490977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.148 [2024-07-25 01:29:19.490991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.148 qpair failed and we were unable to recover it. 
00:28:57.148 [2024-07-25 01:29:19.491399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.148 [2024-07-25 01:29:19.491412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.148 qpair failed and we were unable to recover it. 00:28:57.148 [2024-07-25 01:29:19.491836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.148 [2024-07-25 01:29:19.491849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.148 qpair failed and we were unable to recover it. 00:28:57.148 [2024-07-25 01:29:19.492175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.148 [2024-07-25 01:29:19.492189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.148 qpair failed and we were unable to recover it. 00:28:57.148 [2024-07-25 01:29:19.492661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.148 [2024-07-25 01:29:19.492674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.148 qpair failed and we were unable to recover it. 00:28:57.148 [2024-07-25 01:29:19.493097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.148 [2024-07-25 01:29:19.493111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.148 qpair failed and we were unable to recover it. 
00:28:57.148 [2024-07-25 01:29:19.493323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.148 [2024-07-25 01:29:19.493336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.148 qpair failed and we were unable to recover it.
00:28:57.148 [2024-07-25 01:29:19.493682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.148 [2024-07-25 01:29:19.493695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.148 qpair failed and we were unable to recover it.
00:28:57.148 [2024-07-25 01:29:19.494094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.148 [2024-07-25 01:29:19.494108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.148 qpair failed and we were unable to recover it.
00:28:57.148 [2024-07-25 01:29:19.494539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.148 [2024-07-25 01:29:19.494553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.148 qpair failed and we were unable to recover it.
00:28:57.148 [2024-07-25 01:29:19.494966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.148 [2024-07-25 01:29:19.494979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.148 qpair failed and we were unable to recover it.
00:28:57.148 [2024-07-25 01:29:19.495412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.148 [2024-07-25 01:29:19.495426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.148 qpair failed and we were unable to recover it.
00:28:57.148 [2024-07-25 01:29:19.495863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.148 [2024-07-25 01:29:19.495876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.148 qpair failed and we were unable to recover it.
00:28:57.148 [2024-07-25 01:29:19.496365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.148 [2024-07-25 01:29:19.496379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.148 qpair failed and we were unable to recover it.
00:28:57.148 [2024-07-25 01:29:19.496839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.148 [2024-07-25 01:29:19.496852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.148 qpair failed and we were unable to recover it.
00:28:57.148 [2024-07-25 01:29:19.497341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.148 [2024-07-25 01:29:19.497355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.148 qpair failed and we were unable to recover it.
00:28:57.148 [2024-07-25 01:29:19.497770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.148 [2024-07-25 01:29:19.497784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.148 qpair failed and we were unable to recover it.
00:28:57.148 [2024-07-25 01:29:19.498266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.148 [2024-07-25 01:29:19.498280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.148 qpair failed and we were unable to recover it.
00:28:57.148 [2024-07-25 01:29:19.498743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.148 [2024-07-25 01:29:19.498756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.148 qpair failed and we were unable to recover it.
00:28:57.148 [2024-07-25 01:29:19.499259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.148 [2024-07-25 01:29:19.499273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.148 qpair failed and we were unable to recover it.
00:28:57.148 [2024-07-25 01:29:19.499510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.148 [2024-07-25 01:29:19.499524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.148 qpair failed and we were unable to recover it.
00:28:57.148 [2024-07-25 01:29:19.499950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.148 [2024-07-25 01:29:19.499964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.148 qpair failed and we were unable to recover it.
00:28:57.148 [2024-07-25 01:29:19.500377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.148 [2024-07-25 01:29:19.500391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.148 qpair failed and we were unable to recover it.
00:28:57.148 [2024-07-25 01:29:19.500819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.148 [2024-07-25 01:29:19.500833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.148 qpair failed and we were unable to recover it.
00:28:57.148 [2024-07-25 01:29:19.501231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.148 [2024-07-25 01:29:19.501245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.148 qpair failed and we were unable to recover it.
00:28:57.148 [2024-07-25 01:29:19.501598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.148 [2024-07-25 01:29:19.501611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.148 qpair failed and we were unable to recover it.
00:28:57.148 [2024-07-25 01:29:19.502071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.148 [2024-07-25 01:29:19.502085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.148 qpair failed and we were unable to recover it.
00:28:57.148 [2024-07-25 01:29:19.502570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.148 [2024-07-25 01:29:19.502584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.148 qpair failed and we were unable to recover it.
00:28:57.148 [2024-07-25 01:29:19.503088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.148 [2024-07-25 01:29:19.503102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.148 qpair failed and we were unable to recover it.
00:28:57.148 [2024-07-25 01:29:19.503577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.148 [2024-07-25 01:29:19.503590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.148 qpair failed and we were unable to recover it.
00:28:57.148 [2024-07-25 01:29:19.503982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.148 [2024-07-25 01:29:19.503996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.148 qpair failed and we were unable to recover it.
00:28:57.148 [2024-07-25 01:29:19.504482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.148 [2024-07-25 01:29:19.504497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.148 qpair failed and we were unable to recover it.
00:28:57.149 [2024-07-25 01:29:19.504977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.149 [2024-07-25 01:29:19.504990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.149 qpair failed and we were unable to recover it.
00:28:57.149 [2024-07-25 01:29:19.505481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.149 [2024-07-25 01:29:19.505496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.149 qpair failed and we were unable to recover it.
00:28:57.149 [2024-07-25 01:29:19.505656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.149 [2024-07-25 01:29:19.505670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.149 qpair failed and we were unable to recover it.
00:28:57.149 [2024-07-25 01:29:19.506129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.149 [2024-07-25 01:29:19.506143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.149 qpair failed and we were unable to recover it.
00:28:57.149 [2024-07-25 01:29:19.506550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.149 [2024-07-25 01:29:19.506564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.149 qpair failed and we were unable to recover it.
00:28:57.149 [2024-07-25 01:29:19.507025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.149 [2024-07-25 01:29:19.507047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.149 qpair failed and we were unable to recover it.
00:28:57.149 [2024-07-25 01:29:19.507392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.149 [2024-07-25 01:29:19.507406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.149 qpair failed and we were unable to recover it.
00:28:57.149 [2024-07-25 01:29:19.507892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.149 [2024-07-25 01:29:19.507905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.149 qpair failed and we were unable to recover it.
00:28:57.149 [2024-07-25 01:29:19.508301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.149 [2024-07-25 01:29:19.508315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.149 qpair failed and we were unable to recover it.
00:28:57.149 [2024-07-25 01:29:19.508742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.149 [2024-07-25 01:29:19.508755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.149 qpair failed and we were unable to recover it.
00:28:57.149 [2024-07-25 01:29:19.509214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.149 [2024-07-25 01:29:19.509228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.149 qpair failed and we were unable to recover it.
00:28:57.149 [2024-07-25 01:29:19.509639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.149 [2024-07-25 01:29:19.509652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.149 qpair failed and we were unable to recover it.
00:28:57.149 [2024-07-25 01:29:19.510083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.149 [2024-07-25 01:29:19.510097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.149 qpair failed and we were unable to recover it.
00:28:57.149 [2024-07-25 01:29:19.510557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.149 [2024-07-25 01:29:19.510571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.149 qpair failed and we were unable to recover it.
00:28:57.149 [2024-07-25 01:29:19.511053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.149 [2024-07-25 01:29:19.511067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.149 qpair failed and we were unable to recover it.
00:28:57.149 [2024-07-25 01:29:19.511575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.149 [2024-07-25 01:29:19.511589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.149 qpair failed and we were unable to recover it.
00:28:57.149 [2024-07-25 01:29:19.512067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.149 [2024-07-25 01:29:19.512081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.149 qpair failed and we were unable to recover it.
00:28:57.149 [2024-07-25 01:29:19.512517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.149 [2024-07-25 01:29:19.512530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.149 qpair failed and we were unable to recover it.
00:28:57.149 [2024-07-25 01:29:19.512933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.149 [2024-07-25 01:29:19.512947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.149 qpair failed and we were unable to recover it.
00:28:57.149 [2024-07-25 01:29:19.513345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.149 [2024-07-25 01:29:19.513360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.149 qpair failed and we were unable to recover it.
00:28:57.149 [2024-07-25 01:29:19.513771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.149 [2024-07-25 01:29:19.513785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.149 qpair failed and we were unable to recover it.
00:28:57.149 [2024-07-25 01:29:19.514196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.149 [2024-07-25 01:29:19.514210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.149 qpair failed and we were unable to recover it.
00:28:57.149 [2024-07-25 01:29:19.514613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.149 [2024-07-25 01:29:19.514626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.149 qpair failed and we were unable to recover it.
00:28:57.149 [2024-07-25 01:29:19.515108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.149 [2024-07-25 01:29:19.515121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.149 qpair failed and we were unable to recover it.
00:28:57.149 [2024-07-25 01:29:19.515278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.149 [2024-07-25 01:29:19.515292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.149 qpair failed and we were unable to recover it.
00:28:57.149 [2024-07-25 01:29:19.515682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.149 [2024-07-25 01:29:19.515695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.149 qpair failed and we were unable to recover it.
00:28:57.149 [2024-07-25 01:29:19.516176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.149 [2024-07-25 01:29:19.516190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.149 qpair failed and we were unable to recover it.
00:28:57.149 [2024-07-25 01:29:19.516677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.149 [2024-07-25 01:29:19.516691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.149 qpair failed and we were unable to recover it.
00:28:57.149 [2024-07-25 01:29:19.517093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.149 [2024-07-25 01:29:19.517107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.149 qpair failed and we were unable to recover it.
00:28:57.149 [2024-07-25 01:29:19.517503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.149 [2024-07-25 01:29:19.517517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.149 qpair failed and we were unable to recover it.
00:28:57.149 [2024-07-25 01:29:19.517836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.149 [2024-07-25 01:29:19.517850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.149 qpair failed and we were unable to recover it.
00:28:57.149 [2024-07-25 01:29:19.518334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.149 [2024-07-25 01:29:19.518348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.149 qpair failed and we were unable to recover it.
00:28:57.149 [2024-07-25 01:29:19.518831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.149 [2024-07-25 01:29:19.518845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.149 qpair failed and we were unable to recover it.
00:28:57.149 [2024-07-25 01:29:19.519254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.149 [2024-07-25 01:29:19.519268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.149 qpair failed and we were unable to recover it.
00:28:57.149 [2024-07-25 01:29:19.519755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.149 [2024-07-25 01:29:19.519768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.149 qpair failed and we were unable to recover it.
00:28:57.149 [2024-07-25 01:29:19.520178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.149 [2024-07-25 01:29:19.520198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.149 qpair failed and we were unable to recover it.
00:28:57.149 [2024-07-25 01:29:19.520536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.149 [2024-07-25 01:29:19.520550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.149 qpair failed and we were unable to recover it.
00:28:57.149 [2024-07-25 01:29:19.520984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.150 [2024-07-25 01:29:19.520997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.150 qpair failed and we were unable to recover it.
00:28:57.150 [2024-07-25 01:29:19.521463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.150 [2024-07-25 01:29:19.521477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.150 qpair failed and we were unable to recover it.
00:28:57.150 [2024-07-25 01:29:19.521938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.150 [2024-07-25 01:29:19.521952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.150 qpair failed and we were unable to recover it.
00:28:57.150 [2024-07-25 01:29:19.522296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.150 [2024-07-25 01:29:19.522310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.150 qpair failed and we were unable to recover it.
00:28:57.150 [2024-07-25 01:29:19.522749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.150 [2024-07-25 01:29:19.522762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.150 qpair failed and we were unable to recover it.
00:28:57.150 [2024-07-25 01:29:19.523179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.150 [2024-07-25 01:29:19.523193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.150 qpair failed and we were unable to recover it.
00:28:57.150 [2024-07-25 01:29:19.523606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.150 [2024-07-25 01:29:19.523620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.150 qpair failed and we were unable to recover it.
00:28:57.150 [2024-07-25 01:29:19.524082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.150 [2024-07-25 01:29:19.524096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.150 qpair failed and we were unable to recover it.
00:28:57.150 [2024-07-25 01:29:19.524581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.150 [2024-07-25 01:29:19.524598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.150 qpair failed and we were unable to recover it.
00:28:57.150 [2024-07-25 01:29:19.525108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.150 [2024-07-25 01:29:19.525122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.150 qpair failed and we were unable to recover it.
00:28:57.150 [2024-07-25 01:29:19.525607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.150 [2024-07-25 01:29:19.525621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.150 qpair failed and we were unable to recover it.
00:28:57.150 [2024-07-25 01:29:19.526048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.150 [2024-07-25 01:29:19.526063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.150 qpair failed and we were unable to recover it.
00:28:57.150 [2024-07-25 01:29:19.526511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.150 [2024-07-25 01:29:19.526525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.150 qpair failed and we were unable to recover it.
00:28:57.150 [2024-07-25 01:29:19.526924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.150 [2024-07-25 01:29:19.526938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.150 qpair failed and we were unable to recover it.
00:28:57.150 [2024-07-25 01:29:19.527424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.150 [2024-07-25 01:29:19.527438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.150 qpair failed and we were unable to recover it.
00:28:57.150 [2024-07-25 01:29:19.527896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.150 [2024-07-25 01:29:19.527910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.150 qpair failed and we were unable to recover it.
00:28:57.150 [2024-07-25 01:29:19.528393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.150 [2024-07-25 01:29:19.528407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.150 qpair failed and we were unable to recover it.
00:28:57.150 [2024-07-25 01:29:19.528867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.150 [2024-07-25 01:29:19.528880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.150 qpair failed and we were unable to recover it.
00:28:57.150 [2024-07-25 01:29:19.529360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.150 [2024-07-25 01:29:19.529374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.150 qpair failed and we were unable to recover it.
00:28:57.150 [2024-07-25 01:29:19.529803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.150 [2024-07-25 01:29:19.529816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.150 qpair failed and we were unable to recover it.
00:28:57.150 [2024-07-25 01:29:19.530228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.150 [2024-07-25 01:29:19.530242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.150 qpair failed and we were unable to recover it.
00:28:57.150 [2024-07-25 01:29:19.530704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.150 [2024-07-25 01:29:19.530718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.150 qpair failed and we were unable to recover it.
00:28:57.150 [2024-07-25 01:29:19.531182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.150 [2024-07-25 01:29:19.531196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.150 qpair failed and we were unable to recover it.
00:28:57.150 [2024-07-25 01:29:19.531686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.150 [2024-07-25 01:29:19.531700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.150 qpair failed and we were unable to recover it.
00:28:57.150 [2024-07-25 01:29:19.532211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.150 [2024-07-25 01:29:19.532225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.150 qpair failed and we were unable to recover it.
00:28:57.150 [2024-07-25 01:29:19.532588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.150 [2024-07-25 01:29:19.532602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.150 qpair failed and we were unable to recover it.
00:28:57.150 [2024-07-25 01:29:19.533080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.150 [2024-07-25 01:29:19.533095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.150 qpair failed and we were unable to recover it.
00:28:57.150 [2024-07-25 01:29:19.533509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.150 [2024-07-25 01:29:19.533522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.150 qpair failed and we were unable to recover it.
00:28:57.150 [2024-07-25 01:29:19.533921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.150 [2024-07-25 01:29:19.533934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.150 qpair failed and we were unable to recover it.
00:28:57.150 [2024-07-25 01:29:19.534360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.150 [2024-07-25 01:29:19.534374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.150 qpair failed and we were unable to recover it.
00:28:57.150 [2024-07-25 01:29:19.534774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.150 [2024-07-25 01:29:19.534787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.150 qpair failed and we were unable to recover it.
00:28:57.150 [2024-07-25 01:29:19.535271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.150 [2024-07-25 01:29:19.535285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.150 qpair failed and we were unable to recover it.
00:28:57.150 [2024-07-25 01:29:19.535715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.150 [2024-07-25 01:29:19.535729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.150 qpair failed and we were unable to recover it.
00:28:57.150 [2024-07-25 01:29:19.536102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.150 [2024-07-25 01:29:19.536117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.150 qpair failed and we were unable to recover it.
00:28:57.150 [2024-07-25 01:29:19.536600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.150 [2024-07-25 01:29:19.536613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.150 qpair failed and we were unable to recover it.
00:28:57.150 [2024-07-25 01:29:19.536956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.150 [2024-07-25 01:29:19.536970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.150 qpair failed and we were unable to recover it.
00:28:57.150 [2024-07-25 01:29:19.537314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.150 [2024-07-25 01:29:19.537329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.150 qpair failed and we were unable to recover it.
00:28:57.150 [2024-07-25 01:29:19.537734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.150 [2024-07-25 01:29:19.537747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.151 qpair failed and we were unable to recover it.
00:28:57.151 [2024-07-25 01:29:19.538179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.151 [2024-07-25 01:29:19.538193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.151 qpair failed and we were unable to recover it.
00:28:57.151 [2024-07-25 01:29:19.538605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.151 [2024-07-25 01:29:19.538618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.151 qpair failed and we were unable to recover it.
00:28:57.151 [2024-07-25 01:29:19.539100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.151 [2024-07-25 01:29:19.539114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.151 qpair failed and we were unable to recover it.
00:28:57.151 [2024-07-25 01:29:19.539528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.151 [2024-07-25 01:29:19.539541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.151 qpair failed and we were unable to recover it.
00:28:57.151 [2024-07-25 01:29:19.539932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.151 [2024-07-25 01:29:19.539945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.151 qpair failed and we were unable to recover it.
00:28:57.151 [2024-07-25 01:29:19.540428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.151 [2024-07-25 01:29:19.540442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.151 qpair failed and we were unable to recover it.
00:28:57.151 [2024-07-25 01:29:19.540927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.151 [2024-07-25 01:29:19.540940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.151 qpair failed and we were unable to recover it.
00:28:57.151 [2024-07-25 01:29:19.541356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.151 [2024-07-25 01:29:19.541371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.151 qpair failed and we were unable to recover it.
00:28:57.151 [2024-07-25 01:29:19.541773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.151 [2024-07-25 01:29:19.541786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.151 qpair failed and we were unable to recover it.
00:28:57.151 [2024-07-25 01:29:19.542248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.151 [2024-07-25 01:29:19.542262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.151 qpair failed and we were unable to recover it.
00:28:57.151 [2024-07-25 01:29:19.542682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.151 [2024-07-25 01:29:19.542697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.151 qpair failed and we were unable to recover it.
00:28:57.151 [2024-07-25 01:29:19.543184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.151 [2024-07-25 01:29:19.543199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.151 qpair failed and we were unable to recover it.
00:28:57.151 [2024-07-25 01:29:19.543602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.151 [2024-07-25 01:29:19.543616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.151 qpair failed and we were unable to recover it.
00:28:57.151 [2024-07-25 01:29:19.544096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.151 [2024-07-25 01:29:19.544110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420
00:28:57.151 qpair failed and we were unable to recover it.
00:28:57.151 [2024-07-25 01:29:19.544527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.151 [2024-07-25 01:29:19.544540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.151 qpair failed and we were unable to recover it. 00:28:57.151 [2024-07-25 01:29:19.545025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.151 [2024-07-25 01:29:19.545039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.151 qpair failed and we were unable to recover it. 00:28:57.151 [2024-07-25 01:29:19.545511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.151 [2024-07-25 01:29:19.545525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.151 qpair failed and we were unable to recover it. 00:28:57.151 [2024-07-25 01:29:19.545879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.151 [2024-07-25 01:29:19.545893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.151 qpair failed and we were unable to recover it. 00:28:57.151 [2024-07-25 01:29:19.546374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.151 [2024-07-25 01:29:19.546388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.151 qpair failed and we were unable to recover it. 
00:28:57.151 [2024-07-25 01:29:19.546869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.151 [2024-07-25 01:29:19.546883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.151 qpair failed and we were unable to recover it. 00:28:57.151 [2024-07-25 01:29:19.547228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.151 [2024-07-25 01:29:19.547242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.151 qpair failed and we were unable to recover it. 00:28:57.151 [2024-07-25 01:29:19.547663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.151 [2024-07-25 01:29:19.547677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.151 qpair failed and we were unable to recover it. 00:28:57.151 [2024-07-25 01:29:19.548166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.151 [2024-07-25 01:29:19.548180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.151 qpair failed and we were unable to recover it. 00:28:57.151 [2024-07-25 01:29:19.548588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.151 [2024-07-25 01:29:19.548602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.151 qpair failed and we were unable to recover it. 
00:28:57.151 [2024-07-25 01:29:19.548947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.151 [2024-07-25 01:29:19.548961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.151 qpair failed and we were unable to recover it. 00:28:57.151 [2024-07-25 01:29:19.549458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.151 [2024-07-25 01:29:19.549472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.151 qpair failed and we were unable to recover it. 00:28:57.151 [2024-07-25 01:29:19.549966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.151 [2024-07-25 01:29:19.549979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.151 qpair failed and we were unable to recover it. 00:28:57.151 [2024-07-25 01:29:19.550388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.151 [2024-07-25 01:29:19.550403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.151 qpair failed and we were unable to recover it. 00:28:57.151 [2024-07-25 01:29:19.550881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.151 [2024-07-25 01:29:19.550894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.151 qpair failed and we were unable to recover it. 
00:28:57.151 [2024-07-25 01:29:19.551376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.151 [2024-07-25 01:29:19.551390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.151 qpair failed and we were unable to recover it. 00:28:57.151 [2024-07-25 01:29:19.551850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.151 [2024-07-25 01:29:19.551864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.151 qpair failed and we were unable to recover it. 00:28:57.151 [2024-07-25 01:29:19.552276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.151 [2024-07-25 01:29:19.552290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.151 qpair failed and we were unable to recover it. 00:28:57.151 [2024-07-25 01:29:19.552592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.151 [2024-07-25 01:29:19.552605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.151 qpair failed and we were unable to recover it. 00:28:57.151 [2024-07-25 01:29:19.553037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.151 [2024-07-25 01:29:19.553060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.151 qpair failed and we were unable to recover it. 
00:28:57.151 [2024-07-25 01:29:19.553524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.151 [2024-07-25 01:29:19.553538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.151 qpair failed and we were unable to recover it. 00:28:57.151 [2024-07-25 01:29:19.554022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.151 [2024-07-25 01:29:19.554036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.151 qpair failed and we were unable to recover it. 00:28:57.151 [2024-07-25 01:29:19.554473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.151 [2024-07-25 01:29:19.554488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.151 qpair failed and we were unable to recover it. 00:28:57.151 [2024-07-25 01:29:19.554832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.152 [2024-07-25 01:29:19.554846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.152 qpair failed and we were unable to recover it. 00:28:57.152 [2024-07-25 01:29:19.555330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.152 [2024-07-25 01:29:19.555345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.152 qpair failed and we were unable to recover it. 
00:28:57.152 [2024-07-25 01:29:19.555706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.152 [2024-07-25 01:29:19.555719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.152 qpair failed and we were unable to recover it. 00:28:57.152 [2024-07-25 01:29:19.556124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.152 [2024-07-25 01:29:19.556138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.152 qpair failed and we were unable to recover it. 00:28:57.152 [2024-07-25 01:29:19.556571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.152 [2024-07-25 01:29:19.556584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.152 qpair failed and we were unable to recover it. 00:28:57.152 [2024-07-25 01:29:19.556989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.152 [2024-07-25 01:29:19.557003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.152 qpair failed and we were unable to recover it. 00:28:57.152 [2024-07-25 01:29:19.557364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.152 [2024-07-25 01:29:19.557379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.152 qpair failed and we were unable to recover it. 
00:28:57.152 [2024-07-25 01:29:19.557778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.152 [2024-07-25 01:29:19.557791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.152 qpair failed and we were unable to recover it. 00:28:57.152 [2024-07-25 01:29:19.558131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.152 [2024-07-25 01:29:19.558145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.152 qpair failed and we were unable to recover it. 00:28:57.152 [2024-07-25 01:29:19.558607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.152 [2024-07-25 01:29:19.558621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.152 qpair failed and we were unable to recover it. 00:28:57.152 [2024-07-25 01:29:19.558832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.152 [2024-07-25 01:29:19.558845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.152 qpair failed and we were unable to recover it. 00:28:57.152 [2024-07-25 01:29:19.559331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.152 [2024-07-25 01:29:19.559345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.152 qpair failed and we were unable to recover it. 
00:28:57.152 [2024-07-25 01:29:19.559760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.152 [2024-07-25 01:29:19.559774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.152 qpair failed and we were unable to recover it. 00:28:57.152 [2024-07-25 01:29:19.560261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.152 [2024-07-25 01:29:19.560277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.152 qpair failed and we were unable to recover it. 00:28:57.152 [2024-07-25 01:29:19.560761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.152 [2024-07-25 01:29:19.560775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.152 qpair failed and we were unable to recover it. 00:28:57.152 [2024-07-25 01:29:19.561189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.152 [2024-07-25 01:29:19.561203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.152 qpair failed and we were unable to recover it. 00:28:57.152 [2024-07-25 01:29:19.561598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.152 [2024-07-25 01:29:19.561612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.152 qpair failed and we were unable to recover it. 
00:28:57.152 [2024-07-25 01:29:19.562073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.152 [2024-07-25 01:29:19.562087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.152 qpair failed and we were unable to recover it. 00:28:57.152 [2024-07-25 01:29:19.562432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.152 [2024-07-25 01:29:19.562446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.152 qpair failed and we were unable to recover it. 00:28:57.152 [2024-07-25 01:29:19.562930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.152 [2024-07-25 01:29:19.562944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.152 qpair failed and we were unable to recover it. 00:28:57.152 [2024-07-25 01:29:19.563450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.152 [2024-07-25 01:29:19.563465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.152 qpair failed and we were unable to recover it. 00:28:57.152 [2024-07-25 01:29:19.563862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.152 [2024-07-25 01:29:19.563875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.152 qpair failed and we were unable to recover it. 
00:28:57.152 [2024-07-25 01:29:19.564308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.152 [2024-07-25 01:29:19.564322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.152 qpair failed and we were unable to recover it. 00:28:57.152 [2024-07-25 01:29:19.564730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.152 [2024-07-25 01:29:19.564743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.152 qpair failed and we were unable to recover it. 00:28:57.152 [2024-07-25 01:29:19.565201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.152 [2024-07-25 01:29:19.565215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.152 qpair failed and we were unable to recover it. 00:28:57.152 [2024-07-25 01:29:19.565623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.152 [2024-07-25 01:29:19.565637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.152 qpair failed and we were unable to recover it. 00:28:57.152 [2024-07-25 01:29:19.566034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.152 [2024-07-25 01:29:19.566052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.152 qpair failed and we were unable to recover it. 
00:28:57.152 [2024-07-25 01:29:19.566470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.152 [2024-07-25 01:29:19.566484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.152 qpair failed and we were unable to recover it. 00:28:57.152 [2024-07-25 01:29:19.566885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.152 [2024-07-25 01:29:19.566899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.152 qpair failed and we were unable to recover it. 00:28:57.152 [2024-07-25 01:29:19.567364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.152 [2024-07-25 01:29:19.567378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.152 qpair failed and we were unable to recover it. 00:28:57.152 [2024-07-25 01:29:19.567868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.152 [2024-07-25 01:29:19.567881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.152 qpair failed and we were unable to recover it. 00:28:57.152 [2024-07-25 01:29:19.568250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.152 [2024-07-25 01:29:19.568264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.152 qpair failed and we were unable to recover it. 
00:28:57.152 [2024-07-25 01:29:19.568673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.153 [2024-07-25 01:29:19.568687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.153 qpair failed and we were unable to recover it. 00:28:57.153 [2024-07-25 01:29:19.569146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.153 [2024-07-25 01:29:19.569160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.153 qpair failed and we were unable to recover it. 00:28:57.153 [2024-07-25 01:29:19.569638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.153 [2024-07-25 01:29:19.569652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.153 qpair failed and we were unable to recover it. 00:28:57.153 [2024-07-25 01:29:19.570082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.153 [2024-07-25 01:29:19.570096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.153 qpair failed and we were unable to recover it. 00:28:57.153 [2024-07-25 01:29:19.570572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.153 [2024-07-25 01:29:19.570586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.153 qpair failed and we were unable to recover it. 
00:28:57.153 [2024-07-25 01:29:19.571073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.153 [2024-07-25 01:29:19.571088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.153 qpair failed and we were unable to recover it. 00:28:57.153 [2024-07-25 01:29:19.571550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.153 [2024-07-25 01:29:19.571564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.153 qpair failed and we were unable to recover it. 00:28:57.153 [2024-07-25 01:29:19.572070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.153 [2024-07-25 01:29:19.572084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.153 qpair failed and we were unable to recover it. 00:28:57.153 [2024-07-25 01:29:19.572488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.153 [2024-07-25 01:29:19.572502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.153 qpair failed and we were unable to recover it. 00:28:57.153 [2024-07-25 01:29:19.572991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.153 [2024-07-25 01:29:19.573004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.153 qpair failed and we were unable to recover it. 
00:28:57.153 [2024-07-25 01:29:19.573489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.153 [2024-07-25 01:29:19.573503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.153 qpair failed and we were unable to recover it. 00:28:57.153 [2024-07-25 01:29:19.573964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.153 [2024-07-25 01:29:19.573978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.153 qpair failed and we were unable to recover it. 00:28:57.153 [2024-07-25 01:29:19.574391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.153 [2024-07-25 01:29:19.574406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.153 qpair failed and we were unable to recover it. 00:28:57.153 [2024-07-25 01:29:19.574869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.153 [2024-07-25 01:29:19.574883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.153 qpair failed and we were unable to recover it. 00:28:57.153 [2024-07-25 01:29:19.575292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.153 [2024-07-25 01:29:19.575306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.153 qpair failed and we were unable to recover it. 
00:28:57.153 [2024-07-25 01:29:19.575792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.153 [2024-07-25 01:29:19.575806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.153 qpair failed and we were unable to recover it. 00:28:57.153 [2024-07-25 01:29:19.576291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.153 [2024-07-25 01:29:19.576307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.153 qpair failed and we were unable to recover it. 00:28:57.153 [2024-07-25 01:29:19.576724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.153 [2024-07-25 01:29:19.576738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.153 qpair failed and we were unable to recover it. 00:28:57.153 [2024-07-25 01:29:19.577223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.153 [2024-07-25 01:29:19.577237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.153 qpair failed and we were unable to recover it. 00:28:57.153 [2024-07-25 01:29:19.577722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.153 [2024-07-25 01:29:19.577735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.153 qpair failed and we were unable to recover it. 
00:28:57.153 [2024-07-25 01:29:19.578128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.153 [2024-07-25 01:29:19.578142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420 00:28:57.153 qpair failed and we were unable to recover it. 
[... the same failure triplet -- posix_sock_create connect() failed, errno = 111 (ECONNREFUSED); nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f83dc000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." -- repeats continuously from 01:29:19.578536 through 01:29:19.626281; repeated entries omitted ...]
00:28:57.428 [2024-07-25 01:29:19.626399] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1384010 is same with the state(5) to be set 00:28:57.428 [2024-07-25 01:29:19.626738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.428 [2024-07-25 01:29:19.626768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.428 qpair failed and we were unable to recover it. 00:28:57.428 [2024-07-25 01:29:19.627236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.428 [2024-07-25 01:29:19.627248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.428 qpair failed and we were unable to recover it. 
00:28:57.428 [2024-07-25 01:29:19.627602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.428 [2024-07-25 01:29:19.627613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.428 qpair failed and we were unable to recover it. 00:28:57.428 [2024-07-25 01:29:19.628033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.428 [2024-07-25 01:29:19.628048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.428 qpair failed and we were unable to recover it. 00:28:57.428 [2024-07-25 01:29:19.628455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.428 [2024-07-25 01:29:19.628465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.428 qpair failed and we were unable to recover it. 00:28:57.428 [2024-07-25 01:29:19.628869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.428 [2024-07-25 01:29:19.628879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.428 qpair failed and we were unable to recover it. 00:28:57.428 [2024-07-25 01:29:19.629345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.428 [2024-07-25 01:29:19.629355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.428 qpair failed and we were unable to recover it. 
00:28:57.428 [2024-07-25 01:29:19.629791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.428 [2024-07-25 01:29:19.629801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.428 qpair failed and we were unable to recover it. 00:28:57.429 [2024-07-25 01:29:19.630089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.429 [2024-07-25 01:29:19.630099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.429 qpair failed and we were unable to recover it. 00:28:57.429 [2024-07-25 01:29:19.630508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.429 [2024-07-25 01:29:19.630518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.429 qpair failed and we were unable to recover it. 00:28:57.429 [2024-07-25 01:29:19.631032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.429 [2024-07-25 01:29:19.631047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.429 qpair failed and we were unable to recover it. 00:28:57.429 [2024-07-25 01:29:19.631450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.429 [2024-07-25 01:29:19.631460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.429 qpair failed and we were unable to recover it. 
00:28:57.429 [2024-07-25 01:29:19.631817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.429 [2024-07-25 01:29:19.631827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.429 qpair failed and we were unable to recover it. 00:28:57.429 [2024-07-25 01:29:19.632178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.429 [2024-07-25 01:29:19.632188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.429 qpair failed and we were unable to recover it. 00:28:57.429 [2024-07-25 01:29:19.632591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.429 [2024-07-25 01:29:19.632601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.429 qpair failed and we were unable to recover it. 00:28:57.429 [2024-07-25 01:29:19.632928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.429 [2024-07-25 01:29:19.632938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.429 qpair failed and we were unable to recover it. 00:28:57.429 [2024-07-25 01:29:19.633433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.429 [2024-07-25 01:29:19.633443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.429 qpair failed and we were unable to recover it. 
00:28:57.429 [2024-07-25 01:29:19.633872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.429 [2024-07-25 01:29:19.633882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.429 qpair failed and we were unable to recover it. 00:28:57.429 [2024-07-25 01:29:19.634224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.429 [2024-07-25 01:29:19.634234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.429 qpair failed and we were unable to recover it. 00:28:57.429 [2024-07-25 01:29:19.634690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.429 [2024-07-25 01:29:19.634700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.429 qpair failed and we were unable to recover it. 00:28:57.429 [2024-07-25 01:29:19.635099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.429 [2024-07-25 01:29:19.635109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.429 qpair failed and we were unable to recover it. 00:28:57.429 [2024-07-25 01:29:19.635534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.429 [2024-07-25 01:29:19.635544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.429 qpair failed and we were unable to recover it. 
00:28:57.429 [2024-07-25 01:29:19.636021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.429 [2024-07-25 01:29:19.636031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.429 qpair failed and we were unable to recover it. 00:28:57.429 [2024-07-25 01:29:19.636488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.429 [2024-07-25 01:29:19.636498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.429 qpair failed and we were unable to recover it. 00:28:57.429 [2024-07-25 01:29:19.636888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.429 [2024-07-25 01:29:19.636898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.429 qpair failed and we were unable to recover it. 00:28:57.429 [2024-07-25 01:29:19.637352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.429 [2024-07-25 01:29:19.637362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.429 qpair failed and we were unable to recover it. 00:28:57.429 [2024-07-25 01:29:19.637853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.429 [2024-07-25 01:29:19.637865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.429 qpair failed and we were unable to recover it. 
00:28:57.429 [2024-07-25 01:29:19.638291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.429 [2024-07-25 01:29:19.638301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.429 qpair failed and we were unable to recover it. 00:28:57.429 [2024-07-25 01:29:19.638731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.429 [2024-07-25 01:29:19.638741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.429 qpair failed and we were unable to recover it. 00:28:57.429 [2024-07-25 01:29:19.639193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.429 [2024-07-25 01:29:19.639203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.429 qpair failed and we were unable to recover it. 00:28:57.429 [2024-07-25 01:29:19.639550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.429 [2024-07-25 01:29:19.639560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.429 qpair failed and we were unable to recover it. 00:28:57.429 [2024-07-25 01:29:19.639956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.429 [2024-07-25 01:29:19.639966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.429 qpair failed and we were unable to recover it. 
00:28:57.429 [2024-07-25 01:29:19.640448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.429 [2024-07-25 01:29:19.640458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.429 qpair failed and we were unable to recover it. 00:28:57.429 [2024-07-25 01:29:19.640959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.429 [2024-07-25 01:29:19.640969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.429 qpair failed and we were unable to recover it. 00:28:57.429 [2024-07-25 01:29:19.641424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.429 [2024-07-25 01:29:19.641434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.429 qpair failed and we were unable to recover it. 00:28:57.429 [2024-07-25 01:29:19.641786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.429 [2024-07-25 01:29:19.641795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.429 qpair failed and we were unable to recover it. 00:28:57.429 [2024-07-25 01:29:19.642202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.429 [2024-07-25 01:29:19.642212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.429 qpair failed and we were unable to recover it. 
00:28:57.429 [2024-07-25 01:29:19.642627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.429 [2024-07-25 01:29:19.642636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.429 qpair failed and we were unable to recover it. 00:28:57.429 [2024-07-25 01:29:19.643049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.429 [2024-07-25 01:29:19.643060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.429 qpair failed and we were unable to recover it. 00:28:57.429 [2024-07-25 01:29:19.643480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.429 [2024-07-25 01:29:19.643490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.429 qpair failed and we were unable to recover it. 00:28:57.429 [2024-07-25 01:29:19.643906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.429 [2024-07-25 01:29:19.643916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.429 qpair failed and we were unable to recover it. 00:28:57.429 [2024-07-25 01:29:19.644379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.429 [2024-07-25 01:29:19.644389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.429 qpair failed and we were unable to recover it. 
00:28:57.429 [2024-07-25 01:29:19.644796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.429 [2024-07-25 01:29:19.644806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.429 qpair failed and we were unable to recover it. 00:28:57.429 [2024-07-25 01:29:19.645056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.430 [2024-07-25 01:29:19.645066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.430 qpair failed and we were unable to recover it. 00:28:57.430 [2024-07-25 01:29:19.645411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.430 [2024-07-25 01:29:19.645421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.430 qpair failed and we were unable to recover it. 00:28:57.430 [2024-07-25 01:29:19.645577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.430 [2024-07-25 01:29:19.645586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.430 qpair failed and we were unable to recover it. 00:28:57.430 [2024-07-25 01:29:19.646002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.430 [2024-07-25 01:29:19.646011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.430 qpair failed and we were unable to recover it. 
00:28:57.430 [2024-07-25 01:29:19.646441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.430 [2024-07-25 01:29:19.646451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.430 qpair failed and we were unable to recover it. 00:28:57.430 [2024-07-25 01:29:19.646791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.430 [2024-07-25 01:29:19.646801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.430 qpair failed and we were unable to recover it. 00:28:57.430 [2024-07-25 01:29:19.647202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.430 [2024-07-25 01:29:19.647212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.430 qpair failed and we were unable to recover it. 00:28:57.430 [2024-07-25 01:29:19.647646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.430 [2024-07-25 01:29:19.647655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.430 qpair failed and we were unable to recover it. 00:28:57.430 [2024-07-25 01:29:19.648134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.430 [2024-07-25 01:29:19.648144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.430 qpair failed and we were unable to recover it. 
00:28:57.430 [2024-07-25 01:29:19.648531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.430 [2024-07-25 01:29:19.648541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.430 qpair failed and we were unable to recover it. 00:28:57.430 [2024-07-25 01:29:19.648998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.430 [2024-07-25 01:29:19.649008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.430 qpair failed and we were unable to recover it. 00:28:57.430 [2024-07-25 01:29:19.649472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.430 [2024-07-25 01:29:19.649483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.430 qpair failed and we were unable to recover it. 00:28:57.430 [2024-07-25 01:29:19.649833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.430 [2024-07-25 01:29:19.649843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.430 qpair failed and we were unable to recover it. 00:28:57.430 [2024-07-25 01:29:19.650437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.430 [2024-07-25 01:29:19.650448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.430 qpair failed and we were unable to recover it. 
00:28:57.430 [2024-07-25 01:29:19.650964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.430 [2024-07-25 01:29:19.650974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.430 qpair failed and we were unable to recover it. 00:28:57.430 [2024-07-25 01:29:19.651428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.430 [2024-07-25 01:29:19.651438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.430 qpair failed and we were unable to recover it. 00:28:57.430 [2024-07-25 01:29:19.651783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.430 [2024-07-25 01:29:19.651793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.430 qpair failed and we were unable to recover it. 00:28:57.430 [2024-07-25 01:29:19.652203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.430 [2024-07-25 01:29:19.652213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.430 qpair failed and we were unable to recover it. 00:28:57.430 [2024-07-25 01:29:19.652625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.430 [2024-07-25 01:29:19.652635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.430 qpair failed and we were unable to recover it. 
00:28:57.430 [2024-07-25 01:29:19.653066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.430 [2024-07-25 01:29:19.653076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.430 qpair failed and we were unable to recover it. 00:28:57.430 [2024-07-25 01:29:19.653475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.430 [2024-07-25 01:29:19.653484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.430 qpair failed and we were unable to recover it. 00:28:57.430 [2024-07-25 01:29:19.653912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.430 [2024-07-25 01:29:19.653921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.430 qpair failed and we were unable to recover it. 00:28:57.430 [2024-07-25 01:29:19.654411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.430 [2024-07-25 01:29:19.654422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.430 qpair failed and we were unable to recover it. 00:28:57.430 [2024-07-25 01:29:19.654895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.430 [2024-07-25 01:29:19.654911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.430 qpair failed and we were unable to recover it. 
00:28:57.430 [2024-07-25 01:29:19.655413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.430 [2024-07-25 01:29:19.655423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.430 qpair failed and we were unable to recover it. 00:28:57.430 [2024-07-25 01:29:19.655923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.430 [2024-07-25 01:29:19.655933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.430 qpair failed and we were unable to recover it. 00:28:57.430 [2024-07-25 01:29:19.656352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.430 [2024-07-25 01:29:19.656362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.430 qpair failed and we were unable to recover it. 00:28:57.430 [2024-07-25 01:29:19.656729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.430 [2024-07-25 01:29:19.656738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.430 qpair failed and we were unable to recover it. 00:28:57.430 [2024-07-25 01:29:19.657092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.430 [2024-07-25 01:29:19.657102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.430 qpair failed and we were unable to recover it. 
00:28:57.430 [2024-07-25 01:29:19.657529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.430 [2024-07-25 01:29:19.657539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.430 qpair failed and we were unable to recover it. 00:28:57.430 [2024-07-25 01:29:19.658017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.430 [2024-07-25 01:29:19.658027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.430 qpair failed and we were unable to recover it. 00:28:57.430 [2024-07-25 01:29:19.658485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.430 [2024-07-25 01:29:19.658495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.430 qpair failed and we were unable to recover it. 00:28:57.430 [2024-07-25 01:29:19.658948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.430 [2024-07-25 01:29:19.658958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.430 qpair failed and we were unable to recover it. 00:28:57.430 [2024-07-25 01:29:19.659436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.430 [2024-07-25 01:29:19.659447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.430 qpair failed and we were unable to recover it. 
00:28:57.430 [2024-07-25 01:29:19.659897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.430 [2024-07-25 01:29:19.659907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.430 qpair failed and we were unable to recover it. 00:28:57.430 [2024-07-25 01:29:19.660387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.430 [2024-07-25 01:29:19.660397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.430 qpair failed and we were unable to recover it. 00:28:57.430 [2024-07-25 01:29:19.660813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.430 [2024-07-25 01:29:19.660823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.430 qpair failed and we were unable to recover it. 00:28:57.430 [2024-07-25 01:29:19.661213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.431 [2024-07-25 01:29:19.661224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.431 qpair failed and we were unable to recover it. 00:28:57.431 [2024-07-25 01:29:19.661648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.431 [2024-07-25 01:29:19.661658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.431 qpair failed and we were unable to recover it. 
00:28:57.431 [2024-07-25 01:29:19.662070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.431 [2024-07-25 01:29:19.662080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.431 qpair failed and we were unable to recover it. 00:28:57.431 [2024-07-25 01:29:19.662534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.431 [2024-07-25 01:29:19.662544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.431 qpair failed and we were unable to recover it. 00:28:57.431 [2024-07-25 01:29:19.663021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.431 [2024-07-25 01:29:19.663031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.431 qpair failed and we were unable to recover it. 00:28:57.431 [2024-07-25 01:29:19.663486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.431 [2024-07-25 01:29:19.663497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.431 qpair failed and we were unable to recover it. 00:28:57.431 [2024-07-25 01:29:19.663699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.431 [2024-07-25 01:29:19.663709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.431 qpair failed and we were unable to recover it. 
00:28:57.434 [2024-07-25 01:29:19.711491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.434 [2024-07-25 01:29:19.711501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.434 qpair failed and we were unable to recover it. 00:28:57.434 [2024-07-25 01:29:19.711886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.434 [2024-07-25 01:29:19.711895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.434 qpair failed and we were unable to recover it. 00:28:57.434 [2024-07-25 01:29:19.712297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.434 [2024-07-25 01:29:19.712307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.434 qpair failed and we were unable to recover it. 00:28:57.434 [2024-07-25 01:29:19.712814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.434 [2024-07-25 01:29:19.712823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.434 qpair failed and we were unable to recover it. 00:28:57.434 [2024-07-25 01:29:19.713209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.434 [2024-07-25 01:29:19.713220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.434 qpair failed and we were unable to recover it. 
00:28:57.434 [2024-07-25 01:29:19.713622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.434 [2024-07-25 01:29:19.713631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.434 qpair failed and we were unable to recover it. 00:28:57.434 [2024-07-25 01:29:19.714051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.434 [2024-07-25 01:29:19.714062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.434 qpair failed and we were unable to recover it. 00:28:57.434 [2024-07-25 01:29:19.714419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.434 [2024-07-25 01:29:19.714429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.434 qpair failed and we were unable to recover it. 00:28:57.434 [2024-07-25 01:29:19.714658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.434 [2024-07-25 01:29:19.714667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.434 qpair failed and we were unable to recover it. 00:28:57.434 [2024-07-25 01:29:19.715069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.434 [2024-07-25 01:29:19.715079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.434 qpair failed and we were unable to recover it. 
00:28:57.434 [2024-07-25 01:29:19.715574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.434 [2024-07-25 01:29:19.715583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.434 qpair failed and we were unable to recover it. 00:28:57.434 [2024-07-25 01:29:19.716086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.434 [2024-07-25 01:29:19.716096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.434 qpair failed and we were unable to recover it. 00:28:57.434 [2024-07-25 01:29:19.716546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.434 [2024-07-25 01:29:19.716556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.434 qpair failed and we were unable to recover it. 00:28:57.434 [2024-07-25 01:29:19.716897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.434 [2024-07-25 01:29:19.716907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.434 qpair failed and we were unable to recover it. 00:28:57.434 [2024-07-25 01:29:19.717381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.434 [2024-07-25 01:29:19.717391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.434 qpair failed and we were unable to recover it. 
00:28:57.434 [2024-07-25 01:29:19.717793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.434 [2024-07-25 01:29:19.717803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.434 qpair failed and we were unable to recover it. 00:28:57.434 [2024-07-25 01:29:19.718205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.434 [2024-07-25 01:29:19.718215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.434 qpair failed and we were unable to recover it. 00:28:57.434 [2024-07-25 01:29:19.718631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.434 [2024-07-25 01:29:19.718640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.434 qpair failed and we were unable to recover it. 00:28:57.434 [2024-07-25 01:29:19.719095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.434 [2024-07-25 01:29:19.719104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.435 qpair failed and we were unable to recover it. 00:28:57.435 [2024-07-25 01:29:19.719556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.435 [2024-07-25 01:29:19.719566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.435 qpair failed and we were unable to recover it. 
00:28:57.435 [2024-07-25 01:29:19.719714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.435 [2024-07-25 01:29:19.719724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.435 qpair failed and we were unable to recover it. 00:28:57.435 [2024-07-25 01:29:19.720126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.435 [2024-07-25 01:29:19.720136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.435 qpair failed and we were unable to recover it. 00:28:57.435 [2024-07-25 01:29:19.720582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.435 [2024-07-25 01:29:19.720591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.435 qpair failed and we were unable to recover it. 00:28:57.435 [2024-07-25 01:29:19.721062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.435 [2024-07-25 01:29:19.721072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.435 qpair failed and we were unable to recover it. 00:28:57.435 [2024-07-25 01:29:19.721498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.435 [2024-07-25 01:29:19.721508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.435 qpair failed and we were unable to recover it. 
00:28:57.435 [2024-07-25 01:29:19.721643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.435 [2024-07-25 01:29:19.721653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.435 qpair failed and we were unable to recover it. 00:28:57.435 [2024-07-25 01:29:19.722130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.435 [2024-07-25 01:29:19.722140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.435 qpair failed and we were unable to recover it. 00:28:57.435 [2024-07-25 01:29:19.722539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.435 [2024-07-25 01:29:19.722548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.435 qpair failed and we were unable to recover it. 00:28:57.435 [2024-07-25 01:29:19.722971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.435 [2024-07-25 01:29:19.722981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.435 qpair failed and we were unable to recover it. 00:28:57.435 [2024-07-25 01:29:19.723457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.435 [2024-07-25 01:29:19.723468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.435 qpair failed and we were unable to recover it. 
00:28:57.435 [2024-07-25 01:29:19.723867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.435 [2024-07-25 01:29:19.723876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.435 qpair failed and we were unable to recover it. 00:28:57.435 [2024-07-25 01:29:19.724277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.435 [2024-07-25 01:29:19.724287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.435 qpair failed and we were unable to recover it. 00:28:57.435 [2024-07-25 01:29:19.724687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.435 [2024-07-25 01:29:19.724697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.435 qpair failed and we were unable to recover it. 00:28:57.435 [2024-07-25 01:29:19.725148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.435 [2024-07-25 01:29:19.725158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.435 qpair failed and we were unable to recover it. 00:28:57.435 [2024-07-25 01:29:19.725610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.435 [2024-07-25 01:29:19.725620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.435 qpair failed and we were unable to recover it. 
00:28:57.435 [2024-07-25 01:29:19.726073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.435 [2024-07-25 01:29:19.726084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.435 qpair failed and we were unable to recover it. 00:28:57.435 [2024-07-25 01:29:19.726499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.435 [2024-07-25 01:29:19.726509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.435 qpair failed and we were unable to recover it. 00:28:57.435 [2024-07-25 01:29:19.726908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.435 [2024-07-25 01:29:19.726918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.435 qpair failed and we were unable to recover it. 00:28:57.435 [2024-07-25 01:29:19.727394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.435 [2024-07-25 01:29:19.727404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.435 qpair failed and we were unable to recover it. 00:28:57.435 [2024-07-25 01:29:19.727854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.435 [2024-07-25 01:29:19.727864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.435 qpair failed and we were unable to recover it. 
00:28:57.435 [2024-07-25 01:29:19.728275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.435 [2024-07-25 01:29:19.728291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.435 qpair failed and we were unable to recover it. 00:28:57.435 [2024-07-25 01:29:19.728630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.435 [2024-07-25 01:29:19.728640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.435 qpair failed and we were unable to recover it. 00:28:57.435 [2024-07-25 01:29:19.729118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.435 [2024-07-25 01:29:19.729128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.435 qpair failed and we were unable to recover it. 00:28:57.435 [2024-07-25 01:29:19.729599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.435 [2024-07-25 01:29:19.729609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.435 qpair failed and we were unable to recover it. 00:28:57.435 [2024-07-25 01:29:19.730008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.435 [2024-07-25 01:29:19.730018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.435 qpair failed and we were unable to recover it. 
00:28:57.435 [2024-07-25 01:29:19.730470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.436 [2024-07-25 01:29:19.730481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.436 qpair failed and we were unable to recover it. 00:28:57.436 [2024-07-25 01:29:19.730958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.436 [2024-07-25 01:29:19.730968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.436 qpair failed and we were unable to recover it. 00:28:57.436 [2024-07-25 01:29:19.731424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.436 [2024-07-25 01:29:19.731434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.436 qpair failed and we were unable to recover it. 00:28:57.436 [2024-07-25 01:29:19.731904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.436 [2024-07-25 01:29:19.731914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.436 qpair failed and we were unable to recover it. 00:28:57.436 [2024-07-25 01:29:19.732392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.436 [2024-07-25 01:29:19.732403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.436 qpair failed and we were unable to recover it. 
00:28:57.436 [2024-07-25 01:29:19.732807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.436 [2024-07-25 01:29:19.732816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.436 qpair failed and we were unable to recover it. 00:28:57.436 [2024-07-25 01:29:19.733289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.436 [2024-07-25 01:29:19.733300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.436 qpair failed and we were unable to recover it. 00:28:57.436 [2024-07-25 01:29:19.733718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.436 [2024-07-25 01:29:19.733728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.436 qpair failed and we were unable to recover it. 00:28:57.436 [2024-07-25 01:29:19.734154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.436 [2024-07-25 01:29:19.734164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.436 qpair failed and we were unable to recover it. 00:28:57.436 [2024-07-25 01:29:19.734616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.436 [2024-07-25 01:29:19.734626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.436 qpair failed and we were unable to recover it. 
00:28:57.436 [2024-07-25 01:29:19.735033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.436 [2024-07-25 01:29:19.735046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.436 qpair failed and we were unable to recover it. 00:28:57.436 [2024-07-25 01:29:19.735501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.436 [2024-07-25 01:29:19.735511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.436 qpair failed and we were unable to recover it. 00:28:57.436 [2024-07-25 01:29:19.735966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.436 [2024-07-25 01:29:19.735976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.436 qpair failed and we were unable to recover it. 00:28:57.436 [2024-07-25 01:29:19.736387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.436 [2024-07-25 01:29:19.736397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.436 qpair failed and we were unable to recover it. 00:28:57.436 [2024-07-25 01:29:19.736820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.436 [2024-07-25 01:29:19.736830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.436 qpair failed and we were unable to recover it. 
00:28:57.436 [2024-07-25 01:29:19.737305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.436 [2024-07-25 01:29:19.737315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.436 qpair failed and we were unable to recover it. 00:28:57.436 [2024-07-25 01:29:19.737769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.436 [2024-07-25 01:29:19.737779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.436 qpair failed and we were unable to recover it. 00:28:57.436 [2024-07-25 01:29:19.738021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.436 [2024-07-25 01:29:19.738031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.436 qpair failed and we were unable to recover it. 00:28:57.436 [2024-07-25 01:29:19.738436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.436 [2024-07-25 01:29:19.738446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.436 qpair failed and we were unable to recover it. 00:28:57.436 [2024-07-25 01:29:19.738919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.436 [2024-07-25 01:29:19.738929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.436 qpair failed and we were unable to recover it. 
00:28:57.436 [2024-07-25 01:29:19.739283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.436 [2024-07-25 01:29:19.739293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.436 qpair failed and we were unable to recover it. 00:28:57.436 [2024-07-25 01:29:19.739639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.436 [2024-07-25 01:29:19.739649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.436 qpair failed and we were unable to recover it. 00:28:57.436 [2024-07-25 01:29:19.740071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.436 [2024-07-25 01:29:19.740082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.436 qpair failed and we were unable to recover it. 00:28:57.436 [2024-07-25 01:29:19.740561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.436 [2024-07-25 01:29:19.740571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.436 qpair failed and we were unable to recover it. 00:28:57.436 [2024-07-25 01:29:19.741048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.436 [2024-07-25 01:29:19.741062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.436 qpair failed and we were unable to recover it. 
00:28:57.436 [2024-07-25 01:29:19.741414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.436 [2024-07-25 01:29:19.741424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.436 qpair failed and we were unable to recover it. 00:28:57.436 [2024-07-25 01:29:19.741825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.436 [2024-07-25 01:29:19.741835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.437 qpair failed and we were unable to recover it. 00:28:57.437 [2024-07-25 01:29:19.742309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.437 [2024-07-25 01:29:19.742319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.437 qpair failed and we were unable to recover it. 00:28:57.437 [2024-07-25 01:29:19.742771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.437 [2024-07-25 01:29:19.742781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.437 qpair failed and we were unable to recover it. 00:28:57.437 [2024-07-25 01:29:19.743236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.437 [2024-07-25 01:29:19.743247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.437 qpair failed and we were unable to recover it. 
00:28:57.437 [2024-07-25 01:29:19.743717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.437 [2024-07-25 01:29:19.743727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.437 qpair failed and we were unable to recover it.
[... the same three-message failure cycle (posix_sock_create connect() errno = 111, nvme_tcp_qpair_connect_sock error for tqpair=0x7f83e4000b90 at 10.0.0.2:4420, "qpair failed and we were unable to recover it.") repeats with only the timestamps changing, through 01:29:19.792506; duplicate entries elided ...]
00:28:57.440 [2024-07-25 01:29:19.792981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.440 [2024-07-25 01:29:19.792990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.440 qpair failed and we were unable to recover it. 00:28:57.440 [2024-07-25 01:29:19.793390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.440 [2024-07-25 01:29:19.793400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.440 qpair failed and we were unable to recover it. 00:28:57.440 [2024-07-25 01:29:19.793853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.440 [2024-07-25 01:29:19.793863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.440 qpair failed and we were unable to recover it. 00:28:57.440 [2024-07-25 01:29:19.794334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.440 [2024-07-25 01:29:19.794345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.440 qpair failed and we were unable to recover it. 00:28:57.440 [2024-07-25 01:29:19.794682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.440 [2024-07-25 01:29:19.794692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.440 qpair failed and we were unable to recover it. 
00:28:57.440 [2024-07-25 01:29:19.795169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.440 [2024-07-25 01:29:19.795179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.440 qpair failed and we were unable to recover it. 00:28:57.440 [2024-07-25 01:29:19.795629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.440 [2024-07-25 01:29:19.795639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.440 qpair failed and we were unable to recover it. 00:28:57.440 [2024-07-25 01:29:19.796045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.440 [2024-07-25 01:29:19.796055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.440 qpair failed and we were unable to recover it. 00:28:57.440 [2024-07-25 01:29:19.796505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.440 [2024-07-25 01:29:19.796515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.440 qpair failed and we were unable to recover it. 00:28:57.440 [2024-07-25 01:29:19.796762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.440 [2024-07-25 01:29:19.796772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.440 qpair failed and we were unable to recover it. 
00:28:57.440 [2024-07-25 01:29:19.797252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.440 [2024-07-25 01:29:19.797262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.440 qpair failed and we were unable to recover it. 00:28:57.440 [2024-07-25 01:29:19.797716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.440 [2024-07-25 01:29:19.797726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.440 qpair failed and we were unable to recover it. 00:28:57.440 [2024-07-25 01:29:19.798130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.440 [2024-07-25 01:29:19.798140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.440 qpair failed and we were unable to recover it. 00:28:57.440 [2024-07-25 01:29:19.798618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.440 [2024-07-25 01:29:19.798628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.440 qpair failed and we were unable to recover it. 00:28:57.440 [2024-07-25 01:29:19.799057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.441 [2024-07-25 01:29:19.799067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.441 qpair failed and we were unable to recover it. 
00:28:57.441 [2024-07-25 01:29:19.799484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.441 [2024-07-25 01:29:19.799494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.441 qpair failed and we were unable to recover it. 00:28:57.441 [2024-07-25 01:29:19.799915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.441 [2024-07-25 01:29:19.799925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.441 qpair failed and we were unable to recover it. 00:28:57.441 [2024-07-25 01:29:19.800398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.441 [2024-07-25 01:29:19.800408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.441 qpair failed and we were unable to recover it. 00:28:57.441 [2024-07-25 01:29:19.800818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.441 [2024-07-25 01:29:19.800828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.441 qpair failed and we were unable to recover it. 00:28:57.441 [2024-07-25 01:29:19.801304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.441 [2024-07-25 01:29:19.801314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.441 qpair failed and we were unable to recover it. 
00:28:57.441 [2024-07-25 01:29:19.801740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.441 [2024-07-25 01:29:19.801750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.441 qpair failed and we were unable to recover it. 00:28:57.441 [2024-07-25 01:29:19.802153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.441 [2024-07-25 01:29:19.802163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.441 qpair failed and we were unable to recover it. 00:28:57.441 [2024-07-25 01:29:19.802641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.441 [2024-07-25 01:29:19.802651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.441 qpair failed and we were unable to recover it. 00:28:57.441 [2024-07-25 01:29:19.803053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.441 [2024-07-25 01:29:19.803063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.441 qpair failed and we were unable to recover it. 00:28:57.441 [2024-07-25 01:29:19.803473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.441 [2024-07-25 01:29:19.803483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.441 qpair failed and we were unable to recover it. 
00:28:57.441 [2024-07-25 01:29:19.803959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.441 [2024-07-25 01:29:19.803968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.441 qpair failed and we were unable to recover it. 00:28:57.441 [2024-07-25 01:29:19.804449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.441 [2024-07-25 01:29:19.804459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.441 qpair failed and we were unable to recover it. 00:28:57.441 [2024-07-25 01:29:19.804858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.441 [2024-07-25 01:29:19.804868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.441 qpair failed and we were unable to recover it. 00:28:57.441 [2024-07-25 01:29:19.805345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.441 [2024-07-25 01:29:19.805355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.441 qpair failed and we were unable to recover it. 00:28:57.441 [2024-07-25 01:29:19.805739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.441 [2024-07-25 01:29:19.805749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.441 qpair failed and we were unable to recover it. 
00:28:57.441 [2024-07-25 01:29:19.806152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.441 [2024-07-25 01:29:19.806163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.441 qpair failed and we were unable to recover it. 00:28:57.441 [2024-07-25 01:29:19.806681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.441 [2024-07-25 01:29:19.806691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.441 qpair failed and we were unable to recover it. 00:28:57.441 [2024-07-25 01:29:19.807092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.441 [2024-07-25 01:29:19.807102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.441 qpair failed and we were unable to recover it. 00:28:57.441 [2024-07-25 01:29:19.807500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.441 [2024-07-25 01:29:19.807510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.441 qpair failed and we were unable to recover it. 00:28:57.441 [2024-07-25 01:29:19.807866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.441 [2024-07-25 01:29:19.807876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.441 qpair failed and we were unable to recover it. 
00:28:57.441 [2024-07-25 01:29:19.808355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.441 [2024-07-25 01:29:19.808365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.441 qpair failed and we were unable to recover it. 00:28:57.441 [2024-07-25 01:29:19.808759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.441 [2024-07-25 01:29:19.808769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.441 qpair failed and we were unable to recover it. 00:28:57.441 [2024-07-25 01:29:19.809004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.441 [2024-07-25 01:29:19.809015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.441 qpair failed and we were unable to recover it. 00:28:57.441 [2024-07-25 01:29:19.809484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.441 [2024-07-25 01:29:19.809494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.441 qpair failed and we were unable to recover it. 00:28:57.441 [2024-07-25 01:29:19.809881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.441 [2024-07-25 01:29:19.809893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.441 qpair failed and we were unable to recover it. 
00:28:57.441 [2024-07-25 01:29:19.810351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.441 [2024-07-25 01:29:19.810361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.441 qpair failed and we were unable to recover it. 00:28:57.441 [2024-07-25 01:29:19.810711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.441 [2024-07-25 01:29:19.810721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.441 qpair failed and we were unable to recover it. 00:28:57.441 [2024-07-25 01:29:19.811126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.441 [2024-07-25 01:29:19.811137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.441 qpair failed and we were unable to recover it. 00:28:57.441 [2024-07-25 01:29:19.811505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.441 [2024-07-25 01:29:19.811515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.441 qpair failed and we were unable to recover it. 00:28:57.441 [2024-07-25 01:29:19.811990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.441 [2024-07-25 01:29:19.812000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.441 qpair failed and we were unable to recover it. 
00:28:57.441 [2024-07-25 01:29:19.812471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.441 [2024-07-25 01:29:19.812481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.441 qpair failed and we were unable to recover it. 00:28:57.441 [2024-07-25 01:29:19.812836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.441 [2024-07-25 01:29:19.812846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.441 qpair failed and we were unable to recover it. 00:28:57.441 [2024-07-25 01:29:19.813242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.441 [2024-07-25 01:29:19.813253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.441 qpair failed and we were unable to recover it. 00:28:57.441 [2024-07-25 01:29:19.813653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.441 [2024-07-25 01:29:19.813663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.441 qpair failed and we were unable to recover it. 00:28:57.441 [2024-07-25 01:29:19.814003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.441 [2024-07-25 01:29:19.814013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.441 qpair failed and we were unable to recover it. 
00:28:57.441 [2024-07-25 01:29:19.814413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.441 [2024-07-25 01:29:19.814424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.441 qpair failed and we were unable to recover it. 00:28:57.441 [2024-07-25 01:29:19.814829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.442 [2024-07-25 01:29:19.814839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.442 qpair failed and we were unable to recover it. 00:28:57.442 [2024-07-25 01:29:19.815225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.442 [2024-07-25 01:29:19.815235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.442 qpair failed and we were unable to recover it. 00:28:57.442 [2024-07-25 01:29:19.815580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.442 [2024-07-25 01:29:19.815589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.442 qpair failed and we were unable to recover it. 00:28:57.442 [2024-07-25 01:29:19.816016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.442 [2024-07-25 01:29:19.816026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.442 qpair failed and we were unable to recover it. 
00:28:57.442 [2024-07-25 01:29:19.816509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.442 [2024-07-25 01:29:19.816519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.442 qpair failed and we were unable to recover it. 00:28:57.442 [2024-07-25 01:29:19.816915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.442 [2024-07-25 01:29:19.816925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.442 qpair failed and we were unable to recover it. 00:28:57.442 [2024-07-25 01:29:19.817283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.442 [2024-07-25 01:29:19.817293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.442 qpair failed and we were unable to recover it. 00:28:57.442 [2024-07-25 01:29:19.817694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.442 [2024-07-25 01:29:19.817704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.442 qpair failed and we were unable to recover it. 00:28:57.442 [2024-07-25 01:29:19.817991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.442 [2024-07-25 01:29:19.818001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.442 qpair failed and we were unable to recover it. 
00:28:57.442 [2024-07-25 01:29:19.818419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.442 [2024-07-25 01:29:19.818429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.442 qpair failed and we were unable to recover it. 00:28:57.442 [2024-07-25 01:29:19.818922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.442 [2024-07-25 01:29:19.818932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.442 qpair failed and we were unable to recover it. 00:28:57.442 [2024-07-25 01:29:19.819417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.442 [2024-07-25 01:29:19.819427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.442 qpair failed and we were unable to recover it. 00:28:57.442 [2024-07-25 01:29:19.819874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.442 [2024-07-25 01:29:19.819884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.442 qpair failed and we were unable to recover it. 00:28:57.442 [2024-07-25 01:29:19.820274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.442 [2024-07-25 01:29:19.820284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.442 qpair failed and we were unable to recover it. 
00:28:57.442 [2024-07-25 01:29:19.820710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.442 [2024-07-25 01:29:19.820720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.442 qpair failed and we were unable to recover it. 00:28:57.442 [2024-07-25 01:29:19.821129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.442 [2024-07-25 01:29:19.821140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.442 qpair failed and we were unable to recover it. 00:28:57.442 [2024-07-25 01:29:19.821604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.442 [2024-07-25 01:29:19.821614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.442 qpair failed and we were unable to recover it. 00:28:57.442 [2024-07-25 01:29:19.822088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.442 [2024-07-25 01:29:19.822099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.442 qpair failed and we were unable to recover it. 00:28:57.442 [2024-07-25 01:29:19.822554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.442 [2024-07-25 01:29:19.822564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.442 qpair failed and we were unable to recover it. 
00:28:57.442 [2024-07-25 01:29:19.823016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.442 [2024-07-25 01:29:19.823027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.442 qpair failed and we were unable to recover it. 00:28:57.442 [2024-07-25 01:29:19.823364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.442 [2024-07-25 01:29:19.823375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.442 qpair failed and we were unable to recover it. 00:28:57.442 [2024-07-25 01:29:19.823846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.442 [2024-07-25 01:29:19.823856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.442 qpair failed and we were unable to recover it. 00:28:57.442 [2024-07-25 01:29:19.824272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.442 [2024-07-25 01:29:19.824282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.442 qpair failed and we were unable to recover it. 00:28:57.442 [2024-07-25 01:29:19.824631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.442 [2024-07-25 01:29:19.824641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.442 qpair failed and we were unable to recover it. 
00:28:57.442 [2024-07-25 01:29:19.825046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.442 [2024-07-25 01:29:19.825057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.442 qpair failed and we were unable to recover it. 00:28:57.442 [2024-07-25 01:29:19.825260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.442 [2024-07-25 01:29:19.825270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.442 qpair failed and we were unable to recover it. 00:28:57.442 [2024-07-25 01:29:19.825681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.442 [2024-07-25 01:29:19.825691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.442 qpair failed and we were unable to recover it. 00:28:57.442 [2024-07-25 01:29:19.826080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.442 [2024-07-25 01:29:19.826091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.442 qpair failed and we were unable to recover it. 00:28:57.442 [2024-07-25 01:29:19.826580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.442 [2024-07-25 01:29:19.826593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.442 qpair failed and we were unable to recover it. 
00:28:57.445 [preceding three messages repeated for every reconnect attempt from 01:29:19.825260 through 01:29:19.872402: each connect() to tqpair=0x7f83e4000b90 (addr=10.0.0.2, port=4420) failed with errno = 111 (ECONNREFUSED) and the qpair could not be recovered]
00:28:57.445 [2024-07-25 01:29:19.872859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.445 [2024-07-25 01:29:19.872869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.445 qpair failed and we were unable to recover it. 00:28:57.445 [2024-07-25 01:29:19.873262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.445 [2024-07-25 01:29:19.873273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.445 qpair failed and we were unable to recover it. 00:28:57.445 [2024-07-25 01:29:19.873618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.445 [2024-07-25 01:29:19.873628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.445 qpair failed and we were unable to recover it. 00:28:57.445 [2024-07-25 01:29:19.874100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.445 [2024-07-25 01:29:19.874110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.445 qpair failed and we were unable to recover it. 00:28:57.445 [2024-07-25 01:29:19.874539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.445 [2024-07-25 01:29:19.874550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.445 qpair failed and we were unable to recover it. 
00:28:57.445 [2024-07-25 01:29:19.874908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.445 [2024-07-25 01:29:19.874918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.445 qpair failed and we were unable to recover it. 00:28:57.446 [2024-07-25 01:29:19.875342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.446 [2024-07-25 01:29:19.875352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.446 qpair failed and we were unable to recover it. 00:28:57.446 [2024-07-25 01:29:19.875749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.446 [2024-07-25 01:29:19.875758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.446 qpair failed and we were unable to recover it. 00:28:57.446 [2024-07-25 01:29:19.876166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.446 [2024-07-25 01:29:19.876177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.446 qpair failed and we were unable to recover it. 00:28:57.446 [2024-07-25 01:29:19.876320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.446 [2024-07-25 01:29:19.876329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.446 qpair failed and we were unable to recover it. 
00:28:57.446 [2024-07-25 01:29:19.876732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.446 [2024-07-25 01:29:19.876742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.446 qpair failed and we were unable to recover it. 00:28:57.446 [2024-07-25 01:29:19.877195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.446 [2024-07-25 01:29:19.877205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.446 qpair failed and we were unable to recover it. 00:28:57.446 [2024-07-25 01:29:19.877659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.446 [2024-07-25 01:29:19.877669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.446 qpair failed and we were unable to recover it. 00:28:57.446 [2024-07-25 01:29:19.878071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.446 [2024-07-25 01:29:19.878081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.446 qpair failed and we were unable to recover it. 00:28:57.446 [2024-07-25 01:29:19.878229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.446 [2024-07-25 01:29:19.878239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.446 qpair failed and we were unable to recover it. 
00:28:57.446 [2024-07-25 01:29:19.878442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.446 [2024-07-25 01:29:19.878452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.446 qpair failed and we were unable to recover it. 00:28:57.446 [2024-07-25 01:29:19.878802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.446 [2024-07-25 01:29:19.878812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.446 qpair failed and we were unable to recover it. 00:28:57.446 [2024-07-25 01:29:19.879147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.446 [2024-07-25 01:29:19.879158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.446 qpair failed and we were unable to recover it. 00:28:57.446 [2024-07-25 01:29:19.879581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.446 [2024-07-25 01:29:19.879591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.446 qpair failed and we were unable to recover it. 00:28:57.446 [2024-07-25 01:29:19.879978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.446 [2024-07-25 01:29:19.879989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.446 qpair failed and we were unable to recover it. 
00:28:57.446 [2024-07-25 01:29:19.880389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.446 [2024-07-25 01:29:19.880399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.446 qpair failed and we were unable to recover it. 00:28:57.446 [2024-07-25 01:29:19.880804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.446 [2024-07-25 01:29:19.880814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.446 qpair failed and we were unable to recover it. 00:28:57.446 [2024-07-25 01:29:19.881216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.446 [2024-07-25 01:29:19.881226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.446 qpair failed and we were unable to recover it. 00:28:57.446 [2024-07-25 01:29:19.881636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.446 [2024-07-25 01:29:19.881646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.446 qpair failed and we were unable to recover it. 00:28:57.446 [2024-07-25 01:29:19.882099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.446 [2024-07-25 01:29:19.882109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.446 qpair failed and we were unable to recover it. 
00:28:57.446 [2024-07-25 01:29:19.882352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.446 [2024-07-25 01:29:19.882362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.446 qpair failed and we were unable to recover it. 00:28:57.446 [2024-07-25 01:29:19.882838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.446 [2024-07-25 01:29:19.882848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.446 qpair failed and we were unable to recover it. 00:28:57.446 [2024-07-25 01:29:19.883328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.446 [2024-07-25 01:29:19.883338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.446 qpair failed and we were unable to recover it. 00:28:57.446 [2024-07-25 01:29:19.883685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.446 [2024-07-25 01:29:19.883694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.446 qpair failed and we were unable to recover it. 00:28:57.446 [2024-07-25 01:29:19.884146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.446 [2024-07-25 01:29:19.884157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.446 qpair failed and we were unable to recover it. 
00:28:57.446 [2024-07-25 01:29:19.884577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.446 [2024-07-25 01:29:19.884587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.446 qpair failed and we were unable to recover it. 00:28:57.446 [2024-07-25 01:29:19.884975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.446 [2024-07-25 01:29:19.884985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.446 qpair failed and we were unable to recover it. 00:28:57.446 [2024-07-25 01:29:19.885437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.446 [2024-07-25 01:29:19.885447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.446 qpair failed and we were unable to recover it. 00:28:57.446 [2024-07-25 01:29:19.885786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.446 [2024-07-25 01:29:19.885795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.446 qpair failed and we were unable to recover it. 00:28:57.446 [2024-07-25 01:29:19.886267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.446 [2024-07-25 01:29:19.886276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.446 qpair failed and we were unable to recover it. 
00:28:57.446 [2024-07-25 01:29:19.886707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.446 [2024-07-25 01:29:19.886717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.446 qpair failed and we were unable to recover it. 00:28:57.446 [2024-07-25 01:29:19.887118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.446 [2024-07-25 01:29:19.887130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.446 qpair failed and we were unable to recover it. 00:28:57.446 [2024-07-25 01:29:19.887527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.446 [2024-07-25 01:29:19.887537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.446 qpair failed and we were unable to recover it. 00:28:57.446 [2024-07-25 01:29:19.887922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.446 [2024-07-25 01:29:19.887932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.446 qpair failed and we were unable to recover it. 00:28:57.446 [2024-07-25 01:29:19.888385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.446 [2024-07-25 01:29:19.888395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.446 qpair failed and we were unable to recover it. 
00:28:57.446 [2024-07-25 01:29:19.888870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.446 [2024-07-25 01:29:19.888880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.446 qpair failed and we were unable to recover it. 00:28:57.446 [2024-07-25 01:29:19.889334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.446 [2024-07-25 01:29:19.889344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.446 qpair failed and we were unable to recover it. 00:28:57.446 [2024-07-25 01:29:19.889819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.446 [2024-07-25 01:29:19.889829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.447 qpair failed and we were unable to recover it. 00:28:57.447 [2024-07-25 01:29:19.890283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.447 [2024-07-25 01:29:19.890293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.447 qpair failed and we were unable to recover it. 00:28:57.447 [2024-07-25 01:29:19.890684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.447 [2024-07-25 01:29:19.890693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.447 qpair failed and we were unable to recover it. 
00:28:57.447 [2024-07-25 01:29:19.891165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.447 [2024-07-25 01:29:19.891175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.447 qpair failed and we were unable to recover it. 00:28:57.447 [2024-07-25 01:29:19.891656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.447 [2024-07-25 01:29:19.891666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.447 qpair failed and we were unable to recover it. 00:28:57.447 [2024-07-25 01:29:19.892006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.447 [2024-07-25 01:29:19.892016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.447 qpair failed and we were unable to recover it. 00:28:57.447 [2024-07-25 01:29:19.892493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.447 [2024-07-25 01:29:19.892503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.447 qpair failed and we were unable to recover it. 00:28:57.447 [2024-07-25 01:29:19.892967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.447 [2024-07-25 01:29:19.892977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.447 qpair failed and we were unable to recover it. 
00:28:57.447 [2024-07-25 01:29:19.893434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.447 [2024-07-25 01:29:19.893444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.447 qpair failed and we were unable to recover it. 00:28:57.447 [2024-07-25 01:29:19.893872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.447 [2024-07-25 01:29:19.893882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.447 qpair failed and we were unable to recover it. 00:28:57.447 [2024-07-25 01:29:19.894296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.447 [2024-07-25 01:29:19.894306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.447 qpair failed and we were unable to recover it. 00:28:57.447 [2024-07-25 01:29:19.894695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.447 [2024-07-25 01:29:19.894705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.447 qpair failed and we were unable to recover it. 00:28:57.447 [2024-07-25 01:29:19.895131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.447 [2024-07-25 01:29:19.895142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.447 qpair failed and we were unable to recover it. 
00:28:57.447 [2024-07-25 01:29:19.895477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.447 [2024-07-25 01:29:19.895487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.447 qpair failed and we were unable to recover it. 00:28:57.447 [2024-07-25 01:29:19.896001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.447 [2024-07-25 01:29:19.896010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.447 qpair failed and we were unable to recover it. 00:28:57.447 [2024-07-25 01:29:19.896464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.447 [2024-07-25 01:29:19.896475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.447 qpair failed and we were unable to recover it. 00:28:57.447 [2024-07-25 01:29:19.896825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.447 [2024-07-25 01:29:19.896835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.447 qpair failed and we were unable to recover it. 00:28:57.447 [2024-07-25 01:29:19.897286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.447 [2024-07-25 01:29:19.897296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.447 qpair failed and we were unable to recover it. 
00:28:57.447 [2024-07-25 01:29:19.897781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.447 [2024-07-25 01:29:19.897791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.447 qpair failed and we were unable to recover it. 00:28:57.447 [2024-07-25 01:29:19.898218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.447 [2024-07-25 01:29:19.898229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.447 qpair failed and we were unable to recover it. 00:28:57.447 [2024-07-25 01:29:19.898681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.447 [2024-07-25 01:29:19.898690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.447 qpair failed and we were unable to recover it. 00:28:57.447 [2024-07-25 01:29:19.899082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.447 [2024-07-25 01:29:19.899092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.447 qpair failed and we were unable to recover it. 00:28:57.447 [2024-07-25 01:29:19.899437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.447 [2024-07-25 01:29:19.899447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.447 qpair failed and we were unable to recover it. 
00:28:57.447 [2024-07-25 01:29:19.899863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.447 [2024-07-25 01:29:19.899873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.447 qpair failed and we were unable to recover it. 00:28:57.447 [2024-07-25 01:29:19.900217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.447 [2024-07-25 01:29:19.900227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.447 qpair failed and we were unable to recover it. 00:28:57.447 [2024-07-25 01:29:19.900635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.447 [2024-07-25 01:29:19.900645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.447 qpair failed and we were unable to recover it. 00:28:57.447 [2024-07-25 01:29:19.901036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.447 [2024-07-25 01:29:19.901048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.447 qpair failed and we were unable to recover it. 00:28:57.447 [2024-07-25 01:29:19.901507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.447 [2024-07-25 01:29:19.901517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.447 qpair failed and we were unable to recover it. 
00:28:57.447 [2024-07-25 01:29:19.901871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.447 [2024-07-25 01:29:19.901881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.447 qpair failed and we were unable to recover it. 00:28:57.447 [2024-07-25 01:29:19.902336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.447 [2024-07-25 01:29:19.902346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.447 qpair failed and we were unable to recover it. 00:28:57.447 [2024-07-25 01:29:19.902790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.447 [2024-07-25 01:29:19.902800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.447 qpair failed and we were unable to recover it. 00:28:57.447 [2024-07-25 01:29:19.903207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.447 [2024-07-25 01:29:19.903218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.447 qpair failed and we were unable to recover it. 00:28:57.447 [2024-07-25 01:29:19.903353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.447 [2024-07-25 01:29:19.903363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.447 qpair failed and we were unable to recover it. 
00:28:57.447 [2024-07-25 01:29:19.903764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.447 [2024-07-25 01:29:19.903774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.447 qpair failed and we were unable to recover it. 00:28:57.447 [2024-07-25 01:29:19.904252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.447 [2024-07-25 01:29:19.904264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.447 qpair failed and we were unable to recover it. 00:28:57.447 [2024-07-25 01:29:19.904620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.447 [2024-07-25 01:29:19.904631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.447 qpair failed and we were unable to recover it. 00:28:57.448 [2024-07-25 01:29:19.904978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.448 [2024-07-25 01:29:19.904989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.448 qpair failed and we were unable to recover it. 00:28:57.448 [2024-07-25 01:29:19.905394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.448 [2024-07-25 01:29:19.905405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.448 qpair failed and we were unable to recover it. 
00:28:57.718 [2024-07-25 01:29:19.950337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.718 [2024-07-25 01:29:19.950348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.718 qpair failed and we were unable to recover it. 00:28:57.718 [2024-07-25 01:29:19.950759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.718 [2024-07-25 01:29:19.950769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.718 qpair failed and we were unable to recover it. 00:28:57.718 [2024-07-25 01:29:19.951170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.718 [2024-07-25 01:29:19.951181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.718 qpair failed and we were unable to recover it. 00:28:57.718 [2024-07-25 01:29:19.951598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.718 [2024-07-25 01:29:19.951608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.718 qpair failed and we were unable to recover it. 00:28:57.718 [2024-07-25 01:29:19.952002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.718 [2024-07-25 01:29:19.952012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.718 qpair failed and we were unable to recover it. 
00:28:57.718 [2024-07-25 01:29:19.952434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.718 [2024-07-25 01:29:19.952445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.718 qpair failed and we were unable to recover it. 00:28:57.718 [2024-07-25 01:29:19.952732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.718 [2024-07-25 01:29:19.952744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.718 qpair failed and we were unable to recover it. 00:28:57.719 [2024-07-25 01:29:19.953195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.719 [2024-07-25 01:29:19.953206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.719 qpair failed and we were unable to recover it. 00:28:57.719 [2024-07-25 01:29:19.953633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.719 [2024-07-25 01:29:19.953643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.719 qpair failed and we were unable to recover it. 00:28:57.719 [2024-07-25 01:29:19.953982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.719 [2024-07-25 01:29:19.953992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.719 qpair failed and we were unable to recover it. 
00:28:57.719 [2024-07-25 01:29:19.954396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.719 [2024-07-25 01:29:19.954407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.719 qpair failed and we were unable to recover it. 00:28:57.719 [2024-07-25 01:29:19.954859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.719 [2024-07-25 01:29:19.954869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.719 qpair failed and we were unable to recover it. 00:28:57.719 [2024-07-25 01:29:19.955273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.719 [2024-07-25 01:29:19.955283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.719 qpair failed and we were unable to recover it. 00:28:57.719 [2024-07-25 01:29:19.955757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.719 [2024-07-25 01:29:19.955768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.719 qpair failed and we were unable to recover it. 00:28:57.719 [2024-07-25 01:29:19.956242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.719 [2024-07-25 01:29:19.956253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.719 qpair failed and we were unable to recover it. 
00:28:57.719 [2024-07-25 01:29:19.956731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.719 [2024-07-25 01:29:19.956741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.719 qpair failed and we were unable to recover it. 00:28:57.719 [2024-07-25 01:29:19.957143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.719 [2024-07-25 01:29:19.957153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.719 qpair failed and we were unable to recover it. 00:28:57.719 [2024-07-25 01:29:19.957569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.719 [2024-07-25 01:29:19.957579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.719 qpair failed and we were unable to recover it. 00:28:57.719 [2024-07-25 01:29:19.957899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.719 [2024-07-25 01:29:19.957911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.719 qpair failed and we were unable to recover it. 00:28:57.719 [2024-07-25 01:29:19.958388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.719 [2024-07-25 01:29:19.958399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.719 qpair failed and we were unable to recover it. 
00:28:57.719 [2024-07-25 01:29:19.958866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.719 [2024-07-25 01:29:19.958876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.719 qpair failed and we were unable to recover it. 00:28:57.719 [2024-07-25 01:29:19.959273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.719 [2024-07-25 01:29:19.959283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.719 qpair failed and we were unable to recover it. 00:28:57.719 [2024-07-25 01:29:19.959682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.719 [2024-07-25 01:29:19.959692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.719 qpair failed and we were unable to recover it. 00:28:57.719 [2024-07-25 01:29:19.960013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.719 [2024-07-25 01:29:19.960023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.719 qpair failed and we were unable to recover it. 00:28:57.719 [2024-07-25 01:29:19.960477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.719 [2024-07-25 01:29:19.960487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.719 qpair failed and we were unable to recover it. 
00:28:57.719 [2024-07-25 01:29:19.960890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.719 [2024-07-25 01:29:19.960900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.719 qpair failed and we were unable to recover it. 00:28:57.719 [2024-07-25 01:29:19.961261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.719 [2024-07-25 01:29:19.961272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.719 qpair failed and we were unable to recover it. 00:28:57.719 [2024-07-25 01:29:19.961692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.719 [2024-07-25 01:29:19.961702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.719 qpair failed and we were unable to recover it. 00:28:57.719 [2024-07-25 01:29:19.962174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.719 [2024-07-25 01:29:19.962185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.719 qpair failed and we were unable to recover it. 00:28:57.719 [2024-07-25 01:29:19.962586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.719 [2024-07-25 01:29:19.962596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.719 qpair failed and we were unable to recover it. 
00:28:57.719 [2024-07-25 01:29:19.962992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.719 [2024-07-25 01:29:19.963002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.719 qpair failed and we were unable to recover it. 00:28:57.719 [2024-07-25 01:29:19.963417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.719 [2024-07-25 01:29:19.963427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.719 qpair failed and we were unable to recover it. 00:28:57.719 [2024-07-25 01:29:19.963878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.719 [2024-07-25 01:29:19.963888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.719 qpair failed and we were unable to recover it. 00:28:57.719 [2024-07-25 01:29:19.964362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.719 [2024-07-25 01:29:19.964374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.719 qpair failed and we were unable to recover it. 00:28:57.719 [2024-07-25 01:29:19.964760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.719 [2024-07-25 01:29:19.964770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.719 qpair failed and we were unable to recover it. 
00:28:57.719 [2024-07-25 01:29:19.965167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.719 [2024-07-25 01:29:19.965177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.719 qpair failed and we were unable to recover it. 00:28:57.719 [2024-07-25 01:29:19.965598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.719 [2024-07-25 01:29:19.965608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.719 qpair failed and we were unable to recover it. 00:28:57.719 [2024-07-25 01:29:19.966004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.719 [2024-07-25 01:29:19.966014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.719 qpair failed and we were unable to recover it. 00:28:57.719 [2024-07-25 01:29:19.966494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.719 [2024-07-25 01:29:19.966506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.719 qpair failed and we were unable to recover it. 00:28:57.719 [2024-07-25 01:29:19.966983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.719 [2024-07-25 01:29:19.966994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.719 qpair failed and we were unable to recover it. 
00:28:57.719 [2024-07-25 01:29:19.967471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.719 [2024-07-25 01:29:19.967481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.720 qpair failed and we were unable to recover it. 00:28:57.720 [2024-07-25 01:29:19.967820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.720 [2024-07-25 01:29:19.967831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.720 qpair failed and we were unable to recover it. 00:28:57.720 [2024-07-25 01:29:19.968307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.720 [2024-07-25 01:29:19.968317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.720 qpair failed and we were unable to recover it. 00:28:57.720 [2024-07-25 01:29:19.968519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.720 [2024-07-25 01:29:19.968529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.720 qpair failed and we were unable to recover it. 00:28:57.720 [2024-07-25 01:29:19.968978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.720 [2024-07-25 01:29:19.968988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.720 qpair failed and we were unable to recover it. 
00:28:57.720 [2024-07-25 01:29:19.969459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.720 [2024-07-25 01:29:19.969469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.720 qpair failed and we were unable to recover it. 00:28:57.720 [2024-07-25 01:29:19.969755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.720 [2024-07-25 01:29:19.969768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.720 qpair failed and we were unable to recover it. 00:28:57.720 [2024-07-25 01:29:19.970162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.720 [2024-07-25 01:29:19.970172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.720 qpair failed and we were unable to recover it. 00:28:57.720 [2024-07-25 01:29:19.970624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.720 [2024-07-25 01:29:19.970634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.720 qpair failed and we were unable to recover it. 00:28:57.720 [2024-07-25 01:29:19.971002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.720 [2024-07-25 01:29:19.971011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.720 qpair failed and we were unable to recover it. 
00:28:57.720 [2024-07-25 01:29:19.971484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.720 [2024-07-25 01:29:19.971494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.720 qpair failed and we were unable to recover it. 00:28:57.720 [2024-07-25 01:29:19.971973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.720 [2024-07-25 01:29:19.971983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.720 qpair failed and we were unable to recover it. 00:28:57.720 [2024-07-25 01:29:19.972389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.720 [2024-07-25 01:29:19.972399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.720 qpair failed and we were unable to recover it. 00:28:57.720 [2024-07-25 01:29:19.972836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.720 [2024-07-25 01:29:19.972846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.720 qpair failed and we were unable to recover it. 00:28:57.720 [2024-07-25 01:29:19.973321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.720 [2024-07-25 01:29:19.973331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.720 qpair failed and we were unable to recover it. 
00:28:57.720 [2024-07-25 01:29:19.973783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.720 [2024-07-25 01:29:19.973792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.720 qpair failed and we were unable to recover it. 00:28:57.720 [2024-07-25 01:29:19.974268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.720 [2024-07-25 01:29:19.974278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.720 qpair failed and we were unable to recover it. 00:28:57.720 [2024-07-25 01:29:19.974624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.720 [2024-07-25 01:29:19.974634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.720 qpair failed and we were unable to recover it. 00:28:57.720 [2024-07-25 01:29:19.974898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.720 [2024-07-25 01:29:19.974908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.720 qpair failed and we were unable to recover it. 00:28:57.720 [2024-07-25 01:29:19.975388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.720 [2024-07-25 01:29:19.975398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.720 qpair failed and we were unable to recover it. 
00:28:57.720 [2024-07-25 01:29:19.975796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.720 [2024-07-25 01:29:19.975807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.720 qpair failed and we were unable to recover it. 00:28:57.720 [2024-07-25 01:29:19.976210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.720 [2024-07-25 01:29:19.976220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.720 qpair failed and we were unable to recover it. 00:28:57.720 [2024-07-25 01:29:19.976677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.720 [2024-07-25 01:29:19.976687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.720 qpair failed and we were unable to recover it. 00:28:57.720 [2024-07-25 01:29:19.976970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.720 [2024-07-25 01:29:19.976980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.720 qpair failed and we were unable to recover it. 00:28:57.720 [2024-07-25 01:29:19.977411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.720 [2024-07-25 01:29:19.977421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.720 qpair failed and we were unable to recover it. 
00:28:57.720 [2024-07-25 01:29:19.977834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.720 [2024-07-25 01:29:19.977844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.720 qpair failed and we were unable to recover it. 00:28:57.720 [2024-07-25 01:29:19.978247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.720 [2024-07-25 01:29:19.978257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.720 qpair failed and we were unable to recover it. 00:28:57.720 [2024-07-25 01:29:19.978734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.720 [2024-07-25 01:29:19.978743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.720 qpair failed and we were unable to recover it. 00:28:57.720 [2024-07-25 01:29:19.979074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.721 [2024-07-25 01:29:19.979084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.721 qpair failed and we were unable to recover it. 00:28:57.721 [2024-07-25 01:29:19.979425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.721 [2024-07-25 01:29:19.979434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.721 qpair failed and we were unable to recover it. 
00:28:57.721 [2024-07-25 01:29:19.979850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.721 [2024-07-25 01:29:19.979859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.721 qpair failed and we were unable to recover it. 00:28:57.721 [2024-07-25 01:29:19.980370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.721 [2024-07-25 01:29:19.980380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.721 qpair failed and we were unable to recover it. 00:28:57.721 [2024-07-25 01:29:19.980853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.721 [2024-07-25 01:29:19.980863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.721 qpair failed and we were unable to recover it. 00:28:57.721 [2024-07-25 01:29:19.981277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.721 [2024-07-25 01:29:19.981287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.721 qpair failed and we were unable to recover it. 00:28:57.721 [2024-07-25 01:29:19.981742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.721 [2024-07-25 01:29:19.981752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.721 qpair failed and we were unable to recover it. 
00:28:57.721 [2024-07-25 01:29:19.982203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.721 [2024-07-25 01:29:19.982213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.721 qpair failed and we were unable to recover it. 00:28:57.721 [2024-07-25 01:29:19.982691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.721 [2024-07-25 01:29:19.982701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.721 qpair failed and we were unable to recover it. 00:28:57.721 [2024-07-25 01:29:19.983154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.721 [2024-07-25 01:29:19.983164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.721 qpair failed and we were unable to recover it. 00:28:57.721 [2024-07-25 01:29:19.983569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.721 [2024-07-25 01:29:19.983579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.721 qpair failed and we were unable to recover it. 00:28:57.721 [2024-07-25 01:29:19.983934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.721 [2024-07-25 01:29:19.983943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.721 qpair failed and we were unable to recover it. 
00:28:57.721 [2024-07-25 01:29:19.984357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.721 [2024-07-25 01:29:19.984367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.721 qpair failed and we were unable to recover it. 00:28:57.721 [2024-07-25 01:29:19.984773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.721 [2024-07-25 01:29:19.984783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.721 qpair failed and we were unable to recover it. 00:28:57.721 [2024-07-25 01:29:19.985256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.721 [2024-07-25 01:29:19.985266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.721 qpair failed and we were unable to recover it. 00:28:57.721 [2024-07-25 01:29:19.985746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.721 [2024-07-25 01:29:19.985756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.721 qpair failed and we were unable to recover it. 00:28:57.721 [2024-07-25 01:29:19.986177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.721 [2024-07-25 01:29:19.986187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.721 qpair failed and we were unable to recover it. 
00:28:57.721 [2024-07-25 01:29:19.986391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.721 [2024-07-25 01:29:19.986401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.721 qpair failed and we were unable to recover it. 00:28:57.721 [2024-07-25 01:29:19.986756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.721 [2024-07-25 01:29:19.986768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.721 qpair failed and we were unable to recover it. 00:28:57.721 [2024-07-25 01:29:19.987221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.721 [2024-07-25 01:29:19.987231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.721 qpair failed and we were unable to recover it. 00:28:57.721 [2024-07-25 01:29:19.987581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.721 [2024-07-25 01:29:19.987591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.721 qpair failed and we were unable to recover it. 00:28:57.721 [2024-07-25 01:29:19.987990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.721 [2024-07-25 01:29:19.988000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.721 qpair failed and we were unable to recover it. 
00:28:57.721 [2024-07-25 01:29:19.988454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.721 [2024-07-25 01:29:19.988464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.721 qpair failed and we were unable to recover it. 00:28:57.721 [2024-07-25 01:29:19.988722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.721 [2024-07-25 01:29:19.988732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.721 qpair failed and we were unable to recover it. 00:28:57.721 [2024-07-25 01:29:19.989211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.721 [2024-07-25 01:29:19.989221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.721 qpair failed and we were unable to recover it. 00:28:57.721 [2024-07-25 01:29:19.989442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.721 [2024-07-25 01:29:19.989451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.721 qpair failed and we were unable to recover it. 00:28:57.721 [2024-07-25 01:29:19.989869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.721 [2024-07-25 01:29:19.989879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.721 qpair failed and we were unable to recover it. 
00:28:57.721 [2024-07-25 01:29:19.990348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.721 [2024-07-25 01:29:19.990358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.721 qpair failed and we were unable to recover it. 00:28:57.721 [2024-07-25 01:29:19.990762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.721 [2024-07-25 01:29:19.990771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.721 qpair failed and we were unable to recover it. 00:28:57.721 [2024-07-25 01:29:19.991044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.721 [2024-07-25 01:29:19.991054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.721 qpair failed and we were unable to recover it. 00:28:57.721 [2024-07-25 01:29:19.991511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.721 [2024-07-25 01:29:19.991521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.721 qpair failed and we were unable to recover it. 00:28:57.721 [2024-07-25 01:29:19.992021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.721 [2024-07-25 01:29:19.992030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.721 qpair failed and we were unable to recover it. 
00:28:57.721 [2024-07-25 01:29:19.992444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.721 [2024-07-25 01:29:19.992455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.721 qpair failed and we were unable to recover it. 00:28:57.721 [2024-07-25 01:29:19.992894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.721 [2024-07-25 01:29:19.992904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.721 qpair failed and we were unable to recover it. 00:28:57.721 [2024-07-25 01:29:19.993389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.721 [2024-07-25 01:29:19.993400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.721 qpair failed and we were unable to recover it. 00:28:57.721 [2024-07-25 01:29:19.993878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.722 [2024-07-25 01:29:19.993888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.722 qpair failed and we were unable to recover it. 00:28:57.722 [2024-07-25 01:29:19.994245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.722 [2024-07-25 01:29:19.994255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.722 qpair failed and we were unable to recover it. 
00:28:57.722 [2024-07-25 01:29:19.994642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.722 [2024-07-25 01:29:19.994651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.722 qpair failed and we were unable to recover it. 00:28:57.722 [2024-07-25 01:29:19.995040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.722 [2024-07-25 01:29:19.995053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.722 qpair failed and we were unable to recover it. 00:28:57.722 [2024-07-25 01:29:19.995402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.722 [2024-07-25 01:29:19.995412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.722 qpair failed and we were unable to recover it. 00:28:57.722 [2024-07-25 01:29:19.995829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.722 [2024-07-25 01:29:19.995839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.722 qpair failed and we were unable to recover it. 00:28:57.722 [2024-07-25 01:29:19.996226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.722 [2024-07-25 01:29:19.996236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.722 qpair failed and we were unable to recover it. 
00:28:57.722 [2024-07-25 01:29:19.996590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.722 [2024-07-25 01:29:19.996600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.722 qpair failed and we were unable to recover it. 00:28:57.722 [2024-07-25 01:29:19.996999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.722 [2024-07-25 01:29:19.997009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.722 qpair failed and we were unable to recover it. 00:28:57.722 [2024-07-25 01:29:19.997403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.722 [2024-07-25 01:29:19.997414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.722 qpair failed and we were unable to recover it. 00:28:57.722 [2024-07-25 01:29:19.997895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.722 [2024-07-25 01:29:19.997905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.722 qpair failed and we were unable to recover it. 00:28:57.722 01:29:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:57.722 [2024-07-25 01:29:19.998324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.722 [2024-07-25 01:29:19.998336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.722 qpair failed and we were unable to recover it. 
00:28:57.722 01:29:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:28:57.722 [2024-07-25 01:29:19.998738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.722 [2024-07-25 01:29:19.998749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.722 qpair failed and we were unable to recover it. 00:28:57.722 01:29:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:57.722 01:29:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:57.722 [2024-07-25 01:29:19.999153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.722 [2024-07-25 01:29:19.999165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.722 qpair failed and we were unable to recover it. 00:28:57.722 01:29:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:57.722 [2024-07-25 01:29:19.999567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.722 [2024-07-25 01:29:19.999580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.722 qpair failed and we were unable to recover it. 00:28:57.722 [2024-07-25 01:29:20.000035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.722 [2024-07-25 01:29:20.000049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.722 qpair failed and we were unable to recover it. 
00:28:57.722 [2024-07-25 01:29:20.000549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.722 [2024-07-25 01:29:20.000560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.722 qpair failed and we were unable to recover it. 00:28:57.722 [2024-07-25 01:29:20.000965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.722 [2024-07-25 01:29:20.000976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.722 qpair failed and we were unable to recover it. 00:28:57.722 [2024-07-25 01:29:20.001427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.722 [2024-07-25 01:29:20.001438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.722 qpair failed and we were unable to recover it. 00:28:57.722 [2024-07-25 01:29:20.001914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.722 [2024-07-25 01:29:20.001925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.722 qpair failed and we were unable to recover it. 00:28:57.722 [2024-07-25 01:29:20.002338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.722 [2024-07-25 01:29:20.002349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.722 qpair failed and we were unable to recover it. 
00:28:57.722 [2024-07-25 01:29:20.002787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.722 [2024-07-25 01:29:20.002801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.722 qpair failed and we were unable to recover it. 00:28:57.722 [2024-07-25 01:29:20.003206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.722 [2024-07-25 01:29:20.003217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.722 qpair failed and we were unable to recover it. 00:28:57.722 [2024-07-25 01:29:20.003621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.722 [2024-07-25 01:29:20.003631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.722 qpair failed and we were unable to recover it. 00:28:57.722 [2024-07-25 01:29:20.004037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.722 [2024-07-25 01:29:20.004049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.722 qpair failed and we were unable to recover it. 00:28:57.722 [2024-07-25 01:29:20.004537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.722 [2024-07-25 01:29:20.004548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.722 qpair failed and we were unable to recover it. 
00:28:57.722 [2024-07-25 01:29:20.005003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.722 [2024-07-25 01:29:20.005015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.722 qpair failed and we were unable to recover it. 00:28:57.722 [2024-07-25 01:29:20.005497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.722 [2024-07-25 01:29:20.005508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.722 qpair failed and we were unable to recover it. 00:28:57.722 [2024-07-25 01:29:20.005909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.722 [2024-07-25 01:29:20.005920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.722 qpair failed and we were unable to recover it. 00:28:57.722 [2024-07-25 01:29:20.006312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.722 [2024-07-25 01:29:20.006323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.722 qpair failed and we were unable to recover it. 00:28:57.722 [2024-07-25 01:29:20.006824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.722 [2024-07-25 01:29:20.006834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.722 qpair failed and we were unable to recover it. 
00:28:57.722 [2024-07-25 01:29:20.007185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.722 [2024-07-25 01:29:20.007196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.722 qpair failed and we were unable to recover it. 00:28:57.722 [2024-07-25 01:29:20.007737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.722 [2024-07-25 01:29:20.007749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.722 qpair failed and we were unable to recover it. 00:28:57.722 [2024-07-25 01:29:20.008182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.722 [2024-07-25 01:29:20.008201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.722 qpair failed and we were unable to recover it. 00:28:57.722 [2024-07-25 01:29:20.008570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.722 [2024-07-25 01:29:20.008585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.722 qpair failed and we were unable to recover it. 00:28:57.722 [2024-07-25 01:29:20.010714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.723 [2024-07-25 01:29:20.010744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.723 qpair failed and we were unable to recover it. 
00:28:57.723 [2024-07-25 01:29:20.011174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.723 [2024-07-25 01:29:20.011191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.723 qpair failed and we were unable to recover it. 00:28:57.723 [2024-07-25 01:29:20.013027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.723 [2024-07-25 01:29:20.013052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.723 qpair failed and we were unable to recover it. 00:28:57.723 [2024-07-25 01:29:20.013463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.723 [2024-07-25 01:29:20.013475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.723 qpair failed and we were unable to recover it. 00:28:57.723 [2024-07-25 01:29:20.013881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.723 [2024-07-25 01:29:20.013893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.723 qpair failed and we were unable to recover it. 00:28:57.723 [2024-07-25 01:29:20.014298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.723 [2024-07-25 01:29:20.014310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.723 qpair failed and we were unable to recover it. 
00:28:57.723 [2024-07-25 01:29:20.014947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.723 [2024-07-25 01:29:20.014958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.723 qpair failed and we were unable to recover it. 00:28:57.723 [2024-07-25 01:29:20.015411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.723 [2024-07-25 01:29:20.015423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.723 qpair failed and we were unable to recover it. 00:28:57.723 [2024-07-25 01:29:20.015908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.723 [2024-07-25 01:29:20.015918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.723 qpair failed and we were unable to recover it. 00:28:57.723 [2024-07-25 01:29:20.016373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.723 [2024-07-25 01:29:20.016384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.723 qpair failed and we were unable to recover it. 00:28:57.723 [2024-07-25 01:29:20.016743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.723 [2024-07-25 01:29:20.016753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.723 qpair failed and we were unable to recover it. 
00:28:57.723 [2024-07-25 01:29:20.017162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.723 [2024-07-25 01:29:20.017172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.723 qpair failed and we were unable to recover it. 00:28:57.723 [2024-07-25 01:29:20.017518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.723 [2024-07-25 01:29:20.017528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.723 qpair failed and we were unable to recover it. 00:28:57.723 [2024-07-25 01:29:20.017978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.723 [2024-07-25 01:29:20.017992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.723 qpair failed and we were unable to recover it. 00:28:57.723 [2024-07-25 01:29:20.018412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.723 [2024-07-25 01:29:20.018424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.723 qpair failed and we were unable to recover it. 00:28:57.723 [2024-07-25 01:29:20.018791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.723 [2024-07-25 01:29:20.018802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.723 qpair failed and we were unable to recover it. 
00:28:57.723 [2024-07-25 01:29:20.019413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.723 [2024-07-25 01:29:20.019424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.723 qpair failed and we were unable to recover it. 00:28:57.723 [2024-07-25 01:29:20.019825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.723 [2024-07-25 01:29:20.019836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.723 qpair failed and we were unable to recover it. 00:28:57.723 [2024-07-25 01:29:20.020267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.723 [2024-07-25 01:29:20.020278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.723 qpair failed and we were unable to recover it. 00:28:57.723 [2024-07-25 01:29:20.020668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.723 [2024-07-25 01:29:20.020678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.723 qpair failed and we were unable to recover it. 00:28:57.723 [2024-07-25 01:29:20.021023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.723 [2024-07-25 01:29:20.021034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.723 qpair failed and we were unable to recover it. 
00:28:57.723 [2024-07-25 01:29:20.021443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.723 [2024-07-25 01:29:20.021454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.723 qpair failed and we were unable to recover it. 00:28:57.723 [2024-07-25 01:29:20.021813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.723 [2024-07-25 01:29:20.021823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.723 qpair failed and we were unable to recover it. 00:28:57.723 [2024-07-25 01:29:20.022211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.723 [2024-07-25 01:29:20.022222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.723 qpair failed and we were unable to recover it. 00:28:57.723 [2024-07-25 01:29:20.022555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.723 [2024-07-25 01:29:20.022565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.723 qpair failed and we were unable to recover it. 00:28:57.723 [2024-07-25 01:29:20.022965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.723 [2024-07-25 01:29:20.022975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.723 qpair failed and we were unable to recover it. 
00:28:57.723 [2024-07-25 01:29:20.023335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.723 [2024-07-25 01:29:20.023346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.723 qpair failed and we were unable to recover it. 00:28:57.723 [2024-07-25 01:29:20.023771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.723 [2024-07-25 01:29:20.023781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.723 qpair failed and we were unable to recover it. 00:28:57.723 [2024-07-25 01:29:20.024391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.723 [2024-07-25 01:29:20.024402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.723 qpair failed and we were unable to recover it. 00:28:57.723 [2024-07-25 01:29:20.024617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.723 [2024-07-25 01:29:20.024627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.723 qpair failed and we were unable to recover it. 00:28:57.723 [2024-07-25 01:29:20.025104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.723 [2024-07-25 01:29:20.025115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.723 qpair failed and we were unable to recover it. 
00:28:57.723 [2024-07-25 01:29:20.025458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.723 [2024-07-25 01:29:20.025468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.723 qpair failed and we were unable to recover it. 00:28:57.723 [2024-07-25 01:29:20.025866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.723 [2024-07-25 01:29:20.025877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.723 qpair failed and we were unable to recover it. 00:28:57.723 [2024-07-25 01:29:20.026241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.723 [2024-07-25 01:29:20.026252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.723 qpair failed and we were unable to recover it. 00:28:57.723 [2024-07-25 01:29:20.026447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.723 [2024-07-25 01:29:20.026457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.723 qpair failed and we were unable to recover it. 00:28:57.723 [2024-07-25 01:29:20.026859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.723 [2024-07-25 01:29:20.026870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.723 qpair failed and we were unable to recover it. 
00:28:57.723 [2024-07-25 01:29:20.027268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.723 [2024-07-25 01:29:20.027280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.724 qpair failed and we were unable to recover it. 00:28:57.724 [2024-07-25 01:29:20.027755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.724 [2024-07-25 01:29:20.027766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.724 qpair failed and we were unable to recover it. 00:28:57.724 [2024-07-25 01:29:20.028226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.724 [2024-07-25 01:29:20.028238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.724 qpair failed and we were unable to recover it. 00:28:57.724 [2024-07-25 01:29:20.028643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.724 [2024-07-25 01:29:20.028653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.724 qpair failed and we were unable to recover it. 00:28:57.724 [2024-07-25 01:29:20.029060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.724 [2024-07-25 01:29:20.029071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420 00:28:57.724 qpair failed and we were unable to recover it. 
00:28:57.724 [2024-07-25 01:29:20.029426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.724 [2024-07-25 01:29:20.029437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.724 qpair failed and we were unable to recover it.
00:28:57.724 [2024-07-25 01:29:20.029850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.724 [2024-07-25 01:29:20.029861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.724 qpair failed and we were unable to recover it.
00:28:57.724 [2024-07-25 01:29:20.030265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.724 [2024-07-25 01:29:20.030277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.724 qpair failed and we were unable to recover it.
00:28:57.724 [2024-07-25 01:29:20.030670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.724 [2024-07-25 01:29:20.030680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.724 qpair failed and we were unable to recover it.
00:28:57.724 [2024-07-25 01:29:20.031322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.724 [2024-07-25 01:29:20.031333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.724 qpair failed and we were unable to recover it.
00:28:57.724 [2024-07-25 01:29:20.031482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.724 [2024-07-25 01:29:20.031492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.724 qpair failed and we were unable to recover it.
00:28:57.724 [2024-07-25 01:29:20.031850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.724 [2024-07-25 01:29:20.031861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.724 qpair failed and we were unable to recover it.
00:28:57.724 [2024-07-25 01:29:20.032264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.724 [2024-07-25 01:29:20.032276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.724 qpair failed and we were unable to recover it.
00:28:57.724 [2024-07-25 01:29:20.032697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.724 [2024-07-25 01:29:20.032707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.724 qpair failed and we were unable to recover it.
00:28:57.724 [2024-07-25 01:29:20.033120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.724 [2024-07-25 01:29:20.033130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.724 qpair failed and we were unable to recover it.
00:28:57.724 [2024-07-25 01:29:20.033521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.724 [2024-07-25 01:29:20.033533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.724 qpair failed and we were unable to recover it.
00:28:57.724 [2024-07-25 01:29:20.033860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.724 [2024-07-25 01:29:20.033872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.724 qpair failed and we were unable to recover it.
00:28:57.724 [2024-07-25 01:29:20.034231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.724 [2024-07-25 01:29:20.034246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.724 qpair failed and we were unable to recover it.
00:28:57.724 [2024-07-25 01:29:20.034722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.724 [2024-07-25 01:29:20.034732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.724 qpair failed and we were unable to recover it.
00:28:57.724 [2024-07-25 01:29:20.035200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.724 [2024-07-25 01:29:20.035211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.724 qpair failed and we were unable to recover it.
00:28:57.724 [2024-07-25 01:29:20.035615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.724 [2024-07-25 01:29:20.035625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.724 qpair failed and we were unable to recover it.
00:28:57.724 [2024-07-25 01:29:20.035868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.724 [2024-07-25 01:29:20.035878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.724 qpair failed and we were unable to recover it.
00:28:57.724 01:29:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:28:57.724 [2024-07-25 01:29:20.036228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.724 [2024-07-25 01:29:20.036241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.724 qpair failed and we were unable to recover it.
00:28:57.724 01:29:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:28:57.724 [2024-07-25 01:29:20.036645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.724 [2024-07-25 01:29:20.036658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.724 qpair failed and we were unable to recover it.
00:28:57.724 01:29:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:57.724 [2024-07-25 01:29:20.037055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.724 [2024-07-25 01:29:20.037067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.724 qpair failed and we were unable to recover it.
00:28:57.724 01:29:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:57.724 [2024-07-25 01:29:20.037469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.724 [2024-07-25 01:29:20.037480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.724 qpair failed and we were unable to recover it.
00:28:57.724 [2024-07-25 01:29:20.037885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.724 [2024-07-25 01:29:20.037896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.724 qpair failed and we were unable to recover it.
00:28:57.724 [2024-07-25 01:29:20.038294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.724 [2024-07-25 01:29:20.038305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.724 qpair failed and we were unable to recover it.
00:28:57.724 [2024-07-25 01:29:20.038705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.724 [2024-07-25 01:29:20.038716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.724 qpair failed and we were unable to recover it.
00:28:57.724 [2024-07-25 01:29:20.039135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.724 [2024-07-25 01:29:20.039146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.724 qpair failed and we were unable to recover it.
00:28:57.724 [2024-07-25 01:29:20.039600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.724 [2024-07-25 01:29:20.039611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.724 qpair failed and we were unable to recover it.
00:28:57.724 [2024-07-25 01:29:20.039946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.724 [2024-07-25 01:29:20.039955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.724 qpair failed and we were unable to recover it.
00:28:57.724 [2024-07-25 01:29:20.040354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.724 [2024-07-25 01:29:20.040365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.724 qpair failed and we were unable to recover it.
00:28:57.724 [2024-07-25 01:29:20.040713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.724 [2024-07-25 01:29:20.040723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.724 qpair failed and we were unable to recover it.
00:28:57.724 [2024-07-25 01:29:20.041059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.724 [2024-07-25 01:29:20.041070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.724 qpair failed and we were unable to recover it.
00:28:57.724 [2024-07-25 01:29:20.041470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.724 [2024-07-25 01:29:20.041481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.724 qpair failed and we were unable to recover it.
00:28:57.724 [2024-07-25 01:29:20.041826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.725 [2024-07-25 01:29:20.041836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.725 qpair failed and we were unable to recover it.
00:28:57.725 [2024-07-25 01:29:20.042235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.725 [2024-07-25 01:29:20.042246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.725 qpair failed and we were unable to recover it.
00:28:57.725 [2024-07-25 01:29:20.042731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.725 [2024-07-25 01:29:20.042743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.725 qpair failed and we were unable to recover it.
00:28:57.725 [2024-07-25 01:29:20.043071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.725 [2024-07-25 01:29:20.043083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.725 qpair failed and we were unable to recover it.
00:28:57.725 [2024-07-25 01:29:20.043487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.725 [2024-07-25 01:29:20.043499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.725 qpair failed and we were unable to recover it.
00:28:57.725 [2024-07-25 01:29:20.043835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.725 [2024-07-25 01:29:20.043847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.725 qpair failed and we were unable to recover it.
00:28:57.725 [2024-07-25 01:29:20.044265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.725 [2024-07-25 01:29:20.044277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.725 qpair failed and we were unable to recover it.
00:28:57.725 [2024-07-25 01:29:20.044689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.725 [2024-07-25 01:29:20.044701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.725 qpair failed and we were unable to recover it.
00:28:57.725 [2024-07-25 01:29:20.045054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.725 [2024-07-25 01:29:20.045066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.725 qpair failed and we were unable to recover it.
00:28:57.725 [2024-07-25 01:29:20.045403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.725 [2024-07-25 01:29:20.045415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.725 qpair failed and we were unable to recover it.
00:28:57.725 [2024-07-25 01:29:20.045831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.725 [2024-07-25 01:29:20.045843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.725 qpair failed and we were unable to recover it.
00:28:57.725 [2024-07-25 01:29:20.046266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.725 [2024-07-25 01:29:20.046278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.725 qpair failed and we were unable to recover it.
00:28:57.725 [2024-07-25 01:29:20.046735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.725 [2024-07-25 01:29:20.046746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.725 qpair failed and we were unable to recover it.
00:28:57.725 [2024-07-25 01:29:20.047140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.725 [2024-07-25 01:29:20.047151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.725 qpair failed and we were unable to recover it.
00:28:57.725 [2024-07-25 01:29:20.047487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.725 [2024-07-25 01:29:20.047499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.725 qpair failed and we were unable to recover it.
00:28:57.725 [2024-07-25 01:29:20.047823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.725 [2024-07-25 01:29:20.047834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.725 qpair failed and we were unable to recover it.
00:28:57.725 [2024-07-25 01:29:20.048244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.725 [2024-07-25 01:29:20.048256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.725 qpair failed and we were unable to recover it.
00:28:57.725 [2024-07-25 01:29:20.048651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.725 [2024-07-25 01:29:20.048663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.725 qpair failed and we were unable to recover it.
00:28:57.725 [2024-07-25 01:29:20.049172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.725 [2024-07-25 01:29:20.049185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.725 qpair failed and we were unable to recover it.
00:28:57.725 [2024-07-25 01:29:20.049593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.725 [2024-07-25 01:29:20.049609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.725 qpair failed and we were unable to recover it.
00:28:57.725 [2024-07-25 01:29:20.050020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.725 [2024-07-25 01:29:20.050032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.725 qpair failed and we were unable to recover it.
00:28:57.725 [2024-07-25 01:29:20.050441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.725 [2024-07-25 01:29:20.050453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.725 qpair failed and we were unable to recover it.
00:28:57.725 [2024-07-25 01:29:20.050625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.725 [2024-07-25 01:29:20.050636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.725 qpair failed and we were unable to recover it.
00:28:57.725 [2024-07-25 01:29:20.050933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.725 [2024-07-25 01:29:20.050945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.725 qpair failed and we were unable to recover it.
00:28:57.725 [2024-07-25 01:29:20.051288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.725 [2024-07-25 01:29:20.051299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.725 qpair failed and we were unable to recover it.
00:28:57.725 [2024-07-25 01:29:20.051506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.725 [2024-07-25 01:29:20.051517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.725 qpair failed and we were unable to recover it.
00:28:57.725 [2024-07-25 01:29:20.051845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.725 [2024-07-25 01:29:20.051856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.725 qpair failed and we were unable to recover it.
00:28:57.725 [2024-07-25 01:29:20.052301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.725 [2024-07-25 01:29:20.052311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.725 qpair failed and we were unable to recover it.
00:28:57.725 [2024-07-25 01:29:20.052646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.725 [2024-07-25 01:29:20.052657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.725 qpair failed and we were unable to recover it.
00:28:57.725 [2024-07-25 01:29:20.053066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.725 [2024-07-25 01:29:20.053077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.725 qpair failed and we were unable to recover it.
00:28:57.725 [2024-07-25 01:29:20.053420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.726 [2024-07-25 01:29:20.053431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.726 qpair failed and we were unable to recover it.
00:28:57.726 [2024-07-25 01:29:20.053793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.726 [2024-07-25 01:29:20.053803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.726 qpair failed and we were unable to recover it.
00:28:57.726 [2024-07-25 01:29:20.054341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.726 [2024-07-25 01:29:20.054353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.726 qpair failed and we were unable to recover it.
00:28:57.726 Malloc0
00:28:57.726 [2024-07-25 01:29:20.054892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.726 [2024-07-25 01:29:20.054903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.726 qpair failed and we were unable to recover it.
00:28:57.726 01:29:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:57.726 01:29:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:28:57.726 01:29:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:57.726 [2024-07-25 01:29:20.056589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.726 [2024-07-25 01:29:20.056619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.726 qpair failed and we were unable to recover it.
00:28:57.726 01:29:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:57.726 [2024-07-25 01:29:20.057003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.726 [2024-07-25 01:29:20.057021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.726 qpair failed and we were unable to recover it.
00:28:57.726 [2024-07-25 01:29:20.057226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.726 [2024-07-25 01:29:20.057245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.726 qpair failed and we were unable to recover it.
00:28:57.726 [2024-07-25 01:29:20.057608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.726 [2024-07-25 01:29:20.057621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.726 qpair failed and we were unable to recover it.
00:28:57.726 [2024-07-25 01:29:20.057982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.726 [2024-07-25 01:29:20.057992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.726 qpair failed and we were unable to recover it.
00:28:57.726 [2024-07-25 01:29:20.058405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.726 [2024-07-25 01:29:20.058416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.726 qpair failed and we were unable to recover it.
00:28:57.726 [2024-07-25 01:29:20.058764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.726 [2024-07-25 01:29:20.058774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.726 qpair failed and we were unable to recover it.
00:28:57.726 [2024-07-25 01:29:20.059170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.726 [2024-07-25 01:29:20.059181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.726 qpair failed and we were unable to recover it.
00:28:57.726 [2024-07-25 01:29:20.059658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.726 [2024-07-25 01:29:20.059668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.726 qpair failed and we were unable to recover it.
00:28:57.726 [2024-07-25 01:29:20.060137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.726 [2024-07-25 01:29:20.060148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.726 qpair failed and we were unable to recover it.
00:28:57.726 [2024-07-25 01:29:20.060546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.726 [2024-07-25 01:29:20.060559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.726 qpair failed and we were unable to recover it.
00:28:57.726 [2024-07-25 01:29:20.061036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.726 [2024-07-25 01:29:20.061050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.726 qpair failed and we were unable to recover it.
00:28:57.726 [2024-07-25 01:29:20.061526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.726 [2024-07-25 01:29:20.061536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.726 qpair failed and we were unable to recover it.
00:28:57.726 [2024-07-25 01:29:20.061963] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:28:57.726 [2024-07-25 01:29:20.061991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.726 [2024-07-25 01:29:20.062001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.726 qpair failed and we were unable to recover it.
00:28:57.726 [2024-07-25 01:29:20.062337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.726 [2024-07-25 01:29:20.062348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.726 qpair failed and we were unable to recover it.
00:28:57.726 [2024-07-25 01:29:20.062766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.726 [2024-07-25 01:29:20.062776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.726 qpair failed and we were unable to recover it.
00:28:57.726 [2024-07-25 01:29:20.063117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.726 [2024-07-25 01:29:20.063127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.726 qpair failed and we were unable to recover it.
00:28:57.726 [2024-07-25 01:29:20.063530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.726 [2024-07-25 01:29:20.063540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.726 qpair failed and we were unable to recover it.
00:28:57.726 [2024-07-25 01:29:20.063932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.726 [2024-07-25 01:29:20.063942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.726 qpair failed and we were unable to recover it.
00:28:57.726 [2024-07-25 01:29:20.064420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.726 [2024-07-25 01:29:20.064431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.726 qpair failed and we were unable to recover it.
00:28:57.726 [2024-07-25 01:29:20.064847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.726 [2024-07-25 01:29:20.064857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.726 qpair failed and we were unable to recover it.
00:28:57.726 [2024-07-25 01:29:20.065245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.726 [2024-07-25 01:29:20.065255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.726 qpair failed and we were unable to recover it.
00:28:57.726 [2024-07-25 01:29:20.065687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.726 [2024-07-25 01:29:20.065697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.726 qpair failed and we were unable to recover it.
00:28:57.726 [2024-07-25 01:29:20.066101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.726 [2024-07-25 01:29:20.066113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.726 qpair failed and we were unable to recover it.
00:28:57.726 [2024-07-25 01:29:20.066472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.726 [2024-07-25 01:29:20.066482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.726 qpair failed and we were unable to recover it.
00:28:57.726 [2024-07-25 01:29:20.066818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.726 [2024-07-25 01:29:20.066827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.726 qpair failed and we were unable to recover it.
00:28:57.726 [2024-07-25 01:29:20.067176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.726 [2024-07-25 01:29:20.067186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.726 qpair failed and we were unable to recover it.
00:28:57.726 01:29:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:57.726 [2024-07-25 01:29:20.067679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.726 [2024-07-25 01:29:20.067689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.726 qpair failed and we were unable to recover it.
00:28:57.726 01:29:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:28:57.726 [2024-07-25 01:29:20.068084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.726 01:29:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:57.726 [2024-07-25 01:29:20.068096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.726 qpair failed and we were unable to recover it.
00:28:57.726 01:29:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:57.726 [2024-07-25 01:29:20.068482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.727 [2024-07-25 01:29:20.068492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.727 qpair failed and we were unable to recover it.
00:28:57.727 [2024-07-25 01:29:20.068969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.727 [2024-07-25 01:29:20.068979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.727 qpair failed and we were unable to recover it.
00:28:57.727 [2024-07-25 01:29:20.069326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.727 [2024-07-25 01:29:20.069336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.727 qpair failed and we were unable to recover it.
00:28:57.727 [2024-07-25 01:29:20.069725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.727 [2024-07-25 01:29:20.069735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.727 qpair failed and we were unable to recover it.
00:28:57.727 [2024-07-25 01:29:20.070128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.727 [2024-07-25 01:29:20.070139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.727 qpair failed and we were unable to recover it.
00:28:57.727 [2024-07-25 01:29:20.070483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.727 [2024-07-25 01:29:20.070493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.727 qpair failed and we were unable to recover it.
00:28:57.727 [2024-07-25 01:29:20.070703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.727 [2024-07-25 01:29:20.070713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.727 qpair failed and we were unable to recover it.
00:28:57.727 [2024-07-25 01:29:20.071113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.727 [2024-07-25 01:29:20.071123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.727 qpair failed and we were unable to recover it.
00:28:57.727 [2024-07-25 01:29:20.071541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.727 [2024-07-25 01:29:20.071551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.727 qpair failed and we were unable to recover it.
00:28:57.727 [2024-07-25 01:29:20.071872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.727 [2024-07-25 01:29:20.071882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.727 qpair failed and we were unable to recover it.
00:28:57.727 [2024-07-25 01:29:20.072342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.727 [2024-07-25 01:29:20.072352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.727 qpair failed and we were unable to recover it.
00:28:57.727 [2024-07-25 01:29:20.072802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.727 [2024-07-25 01:29:20.072811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.727 qpair failed and we were unable to recover it.
00:28:57.727 [2024-07-25 01:29:20.073105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.727 [2024-07-25 01:29:20.073115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.727 qpair failed and we were unable to recover it.
00:28:57.727 [2024-07-25 01:29:20.073510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.727 [2024-07-25 01:29:20.073520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.727 qpair failed and we were unable to recover it.
00:28:57.727 [2024-07-25 01:29:20.073981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.727 [2024-07-25 01:29:20.073991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.727 qpair failed and we were unable to recover it.
00:28:57.727 [2024-07-25 01:29:20.074347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.727 [2024-07-25 01:29:20.074357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.727 qpair failed and we were unable to recover it.
00:28:57.727 [2024-07-25 01:29:20.074826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.727 [2024-07-25 01:29:20.074836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.727 qpair failed and we were unable to recover it.
00:28:57.727 [2024-07-25 01:29:20.075241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.727 [2024-07-25 01:29:20.075252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.727 qpair failed and we were unable to recover it.
00:28:57.727 01:29:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:57.727 [2024-07-25 01:29:20.075725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.727 [2024-07-25 01:29:20.075735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.727 qpair failed and we were unable to recover it.
00:28:57.727 01:29:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:28:57.727 01:29:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:57.727 [2024-07-25 01:29:20.076142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.727 [2024-07-25 01:29:20.076153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.727 qpair failed and we were unable to recover it.
00:28:57.727 01:29:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:57.727 [2024-07-25 01:29:20.076571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.727 [2024-07-25 01:29:20.076582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.727 qpair failed and we were unable to recover it.
00:28:57.727 [2024-07-25 01:29:20.077007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.727 [2024-07-25 01:29:20.077017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.727 qpair failed and we were unable to recover it.
00:28:57.727 [2024-07-25 01:29:20.077221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.727 [2024-07-25 01:29:20.077232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.727 qpair failed and we were unable to recover it.
00:28:57.727 [2024-07-25 01:29:20.077637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.727 [2024-07-25 01:29:20.077647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.727 qpair failed and we were unable to recover it.
00:28:57.727 [2024-07-25 01:29:20.078054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.727 [2024-07-25 01:29:20.078065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.727 qpair failed and we were unable to recover it.
00:28:57.727 [2024-07-25 01:29:20.078401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.727 [2024-07-25 01:29:20.078411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.727 qpair failed and we were unable to recover it.
00:28:57.727 [2024-07-25 01:29:20.078808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.727 [2024-07-25 01:29:20.078818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.727 qpair failed and we were unable to recover it.
00:28:57.727 [2024-07-25 01:29:20.079227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.727 [2024-07-25 01:29:20.079237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.727 qpair failed and we were unable to recover it.
00:28:57.727 [2024-07-25 01:29:20.079642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.727 [2024-07-25 01:29:20.079652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.727 qpair failed and we were unable to recover it.
00:28:57.727 [2024-07-25 01:29:20.080054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.727 [2024-07-25 01:29:20.080064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.727 qpair failed and we were unable to recover it.
00:28:57.727 [2024-07-25 01:29:20.080496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.727 [2024-07-25 01:29:20.080506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.727 qpair failed and we were unable to recover it.
00:28:57.727 [2024-07-25 01:29:20.080983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.727 [2024-07-25 01:29:20.080993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.727 qpair failed and we were unable to recover it.
00:28:57.727 [2024-07-25 01:29:20.081230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.727 [2024-07-25 01:29:20.081240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.727 qpair failed and we were unable to recover it.
00:28:57.727 [2024-07-25 01:29:20.081636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.727 [2024-07-25 01:29:20.081646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.727 qpair failed and we were unable to recover it.
00:28:57.727 [2024-07-25 01:29:20.082071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.727 [2024-07-25 01:29:20.082081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.727 qpair failed and we were unable to recover it.
00:28:57.727 [2024-07-25 01:29:20.082569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.727 [2024-07-25 01:29:20.082580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.727 qpair failed and we were unable to recover it.
00:28:57.727 [2024-07-25 01:29:20.082977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.728 [2024-07-25 01:29:20.082987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.728 qpair failed and we were unable to recover it.
00:28:57.728 [2024-07-25 01:29:20.083336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.728 [2024-07-25 01:29:20.083346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.728 qpair failed and we were unable to recover it.
00:28:57.728 01:29:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:57.728 [2024-07-25 01:29:20.083738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.728 [2024-07-25 01:29:20.083749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.728 qpair failed and we were unable to recover it.
00:28:57.728 01:29:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:28:57.728 01:29:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:57.728 [2024-07-25 01:29:20.084153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.728 [2024-07-25 01:29:20.084164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.728 qpair failed and we were unable to recover it.
00:28:57.728 01:29:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:57.728 [2024-07-25 01:29:20.084597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.728 [2024-07-25 01:29:20.084608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.728 qpair failed and we were unable to recover it.
00:28:57.728 [2024-07-25 01:29:20.085006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.728 [2024-07-25 01:29:20.085016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.728 qpair failed and we were unable to recover it.
00:28:57.728 [2024-07-25 01:29:20.085467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.728 [2024-07-25 01:29:20.085478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.728 qpair failed and we were unable to recover it.
00:28:57.728 [2024-07-25 01:29:20.085865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.728 [2024-07-25 01:29:20.085875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.728 qpair failed and we were unable to recover it.
00:28:57.728 [2024-07-25 01:29:20.086270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.728 [2024-07-25 01:29:20.086280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.728 qpair failed and we were unable to recover it.
00:28:57.728 [2024-07-25 01:29:20.086682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.728 [2024-07-25 01:29:20.086692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.728 qpair failed and we were unable to recover it.
00:28:57.728 [2024-07-25 01:29:20.086913] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:57.728 [2024-07-25 01:29:20.087168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.728 [2024-07-25 01:29:20.087179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e4000b90 with addr=10.0.0.2, port=4420
00:28:57.728 qpair failed and we were unable to recover it.
00:28:57.728 01:29:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:57.728 01:29:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:28:57.728 01:29:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:57.728 01:29:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:57.728 [2024-07-25 01:29:20.092599] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:57.728 [2024-07-25 01:29:20.092754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:57.728 [2024-07-25 01:29:20.092777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:57.728 [2024-07-25 01:29:20.092785] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:57.728 [2024-07-25 01:29:20.092792] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83e4000b90
00:28:57.728 [2024-07-25 01:29:20.092814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:57.728 qpair failed and we were unable to recover it.
00:28:57.728 01:29:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:57.728 01:29:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1058578
00:28:57.728 [2024-07-25 01:29:20.102511] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:57.728 [2024-07-25 01:29:20.102656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:57.728 [2024-07-25 01:29:20.102674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:57.728 [2024-07-25 01:29:20.102681] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:57.728 [2024-07-25 01:29:20.102686] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83e4000b90
00:28:57.728 [2024-07-25 01:29:20.102704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:57.728 qpair failed and we were unable to recover it.
00:28:57.728 [2024-07-25 01:29:20.112570] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:57.728 [2024-07-25 01:29:20.112725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:57.728 [2024-07-25 01:29:20.112742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:57.728 [2024-07-25 01:29:20.112749] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:57.728 [2024-07-25 01:29:20.112755] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83e4000b90
00:28:57.728 [2024-07-25 01:29:20.112771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:57.728 qpair failed and we were unable to recover it.
00:28:57.728 [2024-07-25 01:29:20.122557] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:57.728 [2024-07-25 01:29:20.122724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:57.728 [2024-07-25 01:29:20.122741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:57.728 [2024-07-25 01:29:20.122748] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:57.728 [2024-07-25 01:29:20.122754] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83e4000b90
00:28:57.728 [2024-07-25 01:29:20.122771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:57.728 qpair failed and we were unable to recover it.
00:28:57.728 [2024-07-25 01:29:20.132610] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:57.728 [2024-07-25 01:29:20.132776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:57.728 [2024-07-25 01:29:20.132794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:57.728 [2024-07-25 01:29:20.132802] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:57.728 [2024-07-25 01:29:20.132808] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83e4000b90
00:28:57.728 [2024-07-25 01:29:20.132824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:57.728 qpair failed and we were unable to recover it.
00:28:57.728 [2024-07-25 01:29:20.142605] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:57.728 [2024-07-25 01:29:20.142744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:57.728 [2024-07-25 01:29:20.142762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:57.728 [2024-07-25 01:29:20.142770] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:57.728 [2024-07-25 01:29:20.142776] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83e4000b90
00:28:57.728 [2024-07-25 01:29:20.142792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:57.728 qpair failed and we were unable to recover it.
00:28:57.728 [2024-07-25 01:29:20.152604] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:57.728 [2024-07-25 01:29:20.152744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:57.728 [2024-07-25 01:29:20.152763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:57.728 [2024-07-25 01:29:20.152775] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:57.728 [2024-07-25 01:29:20.152781] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83e4000b90
00:28:57.728 [2024-07-25 01:29:20.152798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:57.728 qpair failed and we were unable to recover it.
00:28:57.728 [2024-07-25 01:29:20.162581] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:57.728 [2024-07-25 01:29:20.162722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:57.728 [2024-07-25 01:29:20.162738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:57.728 [2024-07-25 01:29:20.162746] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:57.728 [2024-07-25 01:29:20.162752] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83e4000b90
00:28:57.728 [2024-07-25 01:29:20.162768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:57.728 qpair failed and we were unable to recover it.
00:28:57.728 [2024-07-25 01:29:20.172659] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:57.729 [2024-07-25 01:29:20.172820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:57.729 [2024-07-25 01:29:20.172836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:57.729 [2024-07-25 01:29:20.172843] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:57.729 [2024-07-25 01:29:20.172850] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83e4000b90
00:28:57.729 [2024-07-25 01:29:20.172866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:57.729 qpair failed and we were unable to recover it.
00:28:57.729 [2024-07-25 01:29:20.182699] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:57.729 [2024-07-25 01:29:20.182840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:57.729 [2024-07-25 01:29:20.182858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:57.729 [2024-07-25 01:29:20.182865] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:57.729 [2024-07-25 01:29:20.182872] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83e4000b90
00:28:57.729 [2024-07-25 01:29:20.182888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:57.729 qpair failed and we were unable to recover it.
00:28:57.729 [2024-07-25 01:29:20.192731] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:57.729 [2024-07-25 01:29:20.192886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:57.729 [2024-07-25 01:29:20.192903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:57.729 [2024-07-25 01:29:20.192910] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:57.729 [2024-07-25 01:29:20.192916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83e4000b90
00:28:57.729 [2024-07-25 01:29:20.192932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:57.729 qpair failed and we were unable to recover it.
00:28:57.991 [2024-07-25 01:29:20.202819] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:57.991 [2024-07-25 01:29:20.203006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:57.991 [2024-07-25 01:29:20.203036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:57.991 [2024-07-25 01:29:20.203057] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:57.991 [2024-07-25 01:29:20.203067] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83dc000b90
00:28:57.991 [2024-07-25 01:29:20.203093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:57.991 qpair failed and we were unable to recover it.
00:28:57.991 [2024-07-25 01:29:20.212788] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:57.991 [2024-07-25 01:29:20.212964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:57.991 [2024-07-25 01:29:20.212983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:57.991 [2024-07-25 01:29:20.212991] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:57.991 [2024-07-25 01:29:20.212998] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83dc000b90
00:28:57.991 [2024-07-25 01:29:20.213016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:57.991 qpair failed and we were unable to recover it.
00:28:57.991 [2024-07-25 01:29:20.222832] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.991 [2024-07-25 01:29:20.222977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.991 [2024-07-25 01:29:20.222994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.991 [2024-07-25 01:29:20.223002] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.991 [2024-07-25 01:29:20.223008] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83dc000b90 00:28:57.991 [2024-07-25 01:29:20.223025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:57.991 qpair failed and we were unable to recover it. 
00:28:57.991 [2024-07-25 01:29:20.232853] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.991 [2024-07-25 01:29:20.232995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.991 [2024-07-25 01:29:20.233012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.991 [2024-07-25 01:29:20.233020] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.991 [2024-07-25 01:29:20.233026] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83dc000b90 00:28:57.991 [2024-07-25 01:29:20.233052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:57.991 qpair failed and we were unable to recover it. 
00:28:57.991 [2024-07-25 01:29:20.242907] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.991 [2024-07-25 01:29:20.243058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.991 [2024-07-25 01:29:20.243079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.991 [2024-07-25 01:29:20.243087] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.991 [2024-07-25 01:29:20.243093] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83dc000b90 00:28:57.991 [2024-07-25 01:29:20.243110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:57.991 qpair failed and we were unable to recover it. 
00:28:57.991 [2024-07-25 01:29:20.252889] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.991 [2024-07-25 01:29:20.253034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.991 [2024-07-25 01:29:20.253059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.991 [2024-07-25 01:29:20.253066] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.991 [2024-07-25 01:29:20.253072] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83dc000b90 00:28:57.991 [2024-07-25 01:29:20.253089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:57.991 qpair failed and we were unable to recover it. 
00:28:57.991 [2024-07-25 01:29:20.262942] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.991 [2024-07-25 01:29:20.263092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.991 [2024-07-25 01:29:20.263110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.991 [2024-07-25 01:29:20.263117] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.991 [2024-07-25 01:29:20.263123] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83dc000b90 00:28:57.991 [2024-07-25 01:29:20.263140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:57.991 qpair failed and we were unable to recover it. 
00:28:57.991 [2024-07-25 01:29:20.272961] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.991 [2024-07-25 01:29:20.273104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.991 [2024-07-25 01:29:20.273124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.991 [2024-07-25 01:29:20.273134] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.991 [2024-07-25 01:29:20.273142] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83dc000b90 00:28:57.991 [2024-07-25 01:29:20.273164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:57.991 qpair failed and we were unable to recover it. 
00:28:57.991 [2024-07-25 01:29:20.283053] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.991 [2024-07-25 01:29:20.283192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.991 [2024-07-25 01:29:20.283210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.991 [2024-07-25 01:29:20.283217] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.991 [2024-07-25 01:29:20.283223] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83dc000b90 00:28:57.991 [2024-07-25 01:29:20.283243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:57.991 qpair failed and we were unable to recover it. 
00:28:57.991 [2024-07-25 01:29:20.293015] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.991 [2024-07-25 01:29:20.293159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.991 [2024-07-25 01:29:20.293177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.991 [2024-07-25 01:29:20.293184] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.991 [2024-07-25 01:29:20.293190] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83dc000b90 00:28:57.991 [2024-07-25 01:29:20.293207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:57.991 qpair failed and we were unable to recover it. 
00:28:57.991 [2024-07-25 01:29:20.303037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.991 [2024-07-25 01:29:20.303181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.991 [2024-07-25 01:29:20.303198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.991 [2024-07-25 01:29:20.303206] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.991 [2024-07-25 01:29:20.303212] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83dc000b90 00:28:57.991 [2024-07-25 01:29:20.303229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:57.991 qpair failed and we were unable to recover it. 
00:28:57.991 [2024-07-25 01:29:20.313093] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.991 [2024-07-25 01:29:20.313238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.991 [2024-07-25 01:29:20.313256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.991 [2024-07-25 01:29:20.313263] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.991 [2024-07-25 01:29:20.313269] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83dc000b90 00:28:57.991 [2024-07-25 01:29:20.313286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:57.991 qpair failed and we were unable to recover it. 
00:28:57.991 [2024-07-25 01:29:20.323145] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.992 [2024-07-25 01:29:20.323316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.992 [2024-07-25 01:29:20.323334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.992 [2024-07-25 01:29:20.323342] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.992 [2024-07-25 01:29:20.323349] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83dc000b90 00:28:57.992 [2024-07-25 01:29:20.323366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:57.992 qpair failed and we were unable to recover it. 
00:28:57.992 [2024-07-25 01:29:20.333452] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.992 [2024-07-25 01:29:20.333606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.992 [2024-07-25 01:29:20.333628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.992 [2024-07-25 01:29:20.333635] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.992 [2024-07-25 01:29:20.333642] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83dc000b90 00:28:57.992 [2024-07-25 01:29:20.333661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:57.992 qpair failed and we were unable to recover it. 
00:28:57.992 [2024-07-25 01:29:20.343210] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.992 [2024-07-25 01:29:20.343349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.992 [2024-07-25 01:29:20.343367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.992 [2024-07-25 01:29:20.343374] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.992 [2024-07-25 01:29:20.343380] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83dc000b90 00:28:57.992 [2024-07-25 01:29:20.343398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:57.992 qpair failed and we were unable to recover it. 
00:28:57.992 [2024-07-25 01:29:20.353175] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.992 [2024-07-25 01:29:20.353325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.992 [2024-07-25 01:29:20.353342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.992 [2024-07-25 01:29:20.353349] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.992 [2024-07-25 01:29:20.353355] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83dc000b90 00:28:57.992 [2024-07-25 01:29:20.353373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:57.992 qpair failed and we were unable to recover it. 
00:28:57.992 [2024-07-25 01:29:20.363241] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.992 [2024-07-25 01:29:20.363415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.992 [2024-07-25 01:29:20.363433] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.992 [2024-07-25 01:29:20.363440] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.992 [2024-07-25 01:29:20.363446] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83dc000b90 00:28:57.992 [2024-07-25 01:29:20.363463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:57.992 qpair failed and we were unable to recover it. 
00:28:57.992 [2024-07-25 01:29:20.373245] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.992 [2024-07-25 01:29:20.373388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.992 [2024-07-25 01:29:20.373405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.992 [2024-07-25 01:29:20.373412] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.992 [2024-07-25 01:29:20.373423] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83dc000b90 00:28:57.992 [2024-07-25 01:29:20.373440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:57.992 qpair failed and we were unable to recover it. 
00:28:57.992 [2024-07-25 01:29:20.383267] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.992 [2024-07-25 01:29:20.383453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.992 [2024-07-25 01:29:20.383474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.992 [2024-07-25 01:29:20.383482] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.992 [2024-07-25 01:29:20.383489] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83dc000b90 00:28:57.992 [2024-07-25 01:29:20.383506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:57.992 qpair failed and we were unable to recover it. 
00:28:57.992 [2024-07-25 01:29:20.393328] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.992 [2024-07-25 01:29:20.393473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.992 [2024-07-25 01:29:20.393491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.992 [2024-07-25 01:29:20.393498] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.992 [2024-07-25 01:29:20.393504] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83dc000b90 00:28:57.992 [2024-07-25 01:29:20.393521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:57.992 qpair failed and we were unable to recover it. 
00:28:57.992 [2024-07-25 01:29:20.403300] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.992 [2024-07-25 01:29:20.403442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.992 [2024-07-25 01:29:20.403459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.992 [2024-07-25 01:29:20.403466] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.992 [2024-07-25 01:29:20.403472] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83dc000b90 00:28:57.992 [2024-07-25 01:29:20.403489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:57.992 qpair failed and we were unable to recover it. 
00:28:57.992 [2024-07-25 01:29:20.413294] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.992 [2024-07-25 01:29:20.413437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.992 [2024-07-25 01:29:20.413454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.992 [2024-07-25 01:29:20.413461] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.992 [2024-07-25 01:29:20.413467] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83dc000b90 00:28:57.992 [2024-07-25 01:29:20.413484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:57.992 qpair failed and we were unable to recover it. 
00:28:57.992 [2024-07-25 01:29:20.423409] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.992 [2024-07-25 01:29:20.423561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.992 [2024-07-25 01:29:20.423578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.992 [2024-07-25 01:29:20.423586] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.992 [2024-07-25 01:29:20.423592] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83dc000b90 00:28:57.992 [2024-07-25 01:29:20.423609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:57.992 qpair failed and we were unable to recover it. 
00:28:57.992 [2024-07-25 01:29:20.433342] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.992 [2024-07-25 01:29:20.433484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.992 [2024-07-25 01:29:20.433501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.992 [2024-07-25 01:29:20.433509] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.992 [2024-07-25 01:29:20.433515] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83dc000b90 00:28:57.992 [2024-07-25 01:29:20.433532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:57.992 qpair failed and we were unable to recover it. 
00:28:57.992 [2024-07-25 01:29:20.443375] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.992 [2024-07-25 01:29:20.443559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.992 [2024-07-25 01:29:20.443578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.992 [2024-07-25 01:29:20.443585] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.992 [2024-07-25 01:29:20.443592] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83dc000b90 00:28:57.992 [2024-07-25 01:29:20.443609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:57.992 qpair failed and we were unable to recover it. 
00:28:57.992 [2024-07-25 01:29:20.453442] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.992 [2024-07-25 01:29:20.453579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.992 [2024-07-25 01:29:20.453597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.992 [2024-07-25 01:29:20.453605] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.993 [2024-07-25 01:29:20.453611] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83dc000b90 00:28:57.993 [2024-07-25 01:29:20.453628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:57.993 qpair failed and we were unable to recover it. 
00:28:57.993 [2024-07-25 01:29:20.463499] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.993 [2024-07-25 01:29:20.463653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.993 [2024-07-25 01:29:20.463672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.993 [2024-07-25 01:29:20.463679] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.993 [2024-07-25 01:29:20.463688] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83dc000b90 00:28:57.993 [2024-07-25 01:29:20.463706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:57.993 qpair failed and we were unable to recover it. 
00:28:57.993 [2024-07-25 01:29:20.473499] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.993 [2024-07-25 01:29:20.473637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.993 [2024-07-25 01:29:20.473654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.993 [2024-07-25 01:29:20.473661] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.993 [2024-07-25 01:29:20.473667] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83dc000b90 00:28:57.993 [2024-07-25 01:29:20.473684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:57.993 qpair failed and we were unable to recover it. 
00:28:58.254 [2024-07-25 01:29:20.483514] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.254 [2024-07-25 01:29:20.483696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.254 [2024-07-25 01:29:20.483715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.254 [2024-07-25 01:29:20.483722] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.254 [2024-07-25 01:29:20.483728] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83dc000b90 00:28:58.254 [2024-07-25 01:29:20.483746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:58.254 qpair failed and we were unable to recover it. 
00:28:58.254 [2024-07-25 01:29:20.493510] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.254 [2024-07-25 01:29:20.493645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.254 [2024-07-25 01:29:20.493663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.254 [2024-07-25 01:29:20.493670] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.254 [2024-07-25 01:29:20.493676] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83dc000b90 00:28:58.254 [2024-07-25 01:29:20.493693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:58.254 qpair failed and we were unable to recover it. 
00:28:58.254 [2024-07-25 01:29:20.503603] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.254 [2024-07-25 01:29:20.503743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.254 [2024-07-25 01:29:20.503760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.254 [2024-07-25 01:29:20.503767] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.254 [2024-07-25 01:29:20.503773] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83dc000b90 00:28:58.254 [2024-07-25 01:29:20.503791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:58.254 qpair failed and we were unable to recover it. 
00:28:58.254 [2024-07-25 01:29:20.513642] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.254 [2024-07-25 01:29:20.513848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.254 [2024-07-25 01:29:20.513879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.254 [2024-07-25 01:29:20.513891] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.254 [2024-07-25 01:29:20.513900] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.254 [2024-07-25 01:29:20.513925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.254 qpair failed and we were unable to recover it. 
00:28:58.254 [2024-07-25 01:29:20.524071] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.254 [2024-07-25 01:29:20.524211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.254 [2024-07-25 01:29:20.524230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.254 [2024-07-25 01:29:20.524238] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.254 [2024-07-25 01:29:20.524244] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.254 [2024-07-25 01:29:20.524262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.254 qpair failed and we were unable to recover it. 
00:28:58.254 [2024-07-25 01:29:20.533665] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.255 [2024-07-25 01:29:20.534009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.255 [2024-07-25 01:29:20.534028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.255 [2024-07-25 01:29:20.534035] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.255 [2024-07-25 01:29:20.534041] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.255 [2024-07-25 01:29:20.534063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.255 qpair failed and we were unable to recover it. 
00:28:58.255 [2024-07-25 01:29:20.543654] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.255 [2024-07-25 01:29:20.543788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.255 [2024-07-25 01:29:20.543806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.255 [2024-07-25 01:29:20.543813] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.255 [2024-07-25 01:29:20.543819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.255 [2024-07-25 01:29:20.543836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.255 qpair failed and we were unable to recover it. 
00:28:58.255 [2024-07-25 01:29:20.553722] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.255 [2024-07-25 01:29:20.553861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.255 [2024-07-25 01:29:20.553879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.255 [2024-07-25 01:29:20.553889] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.255 [2024-07-25 01:29:20.553895] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.255 [2024-07-25 01:29:20.553912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.255 qpair failed and we were unable to recover it. 
00:28:58.255 [2024-07-25 01:29:20.563794] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.255 [2024-07-25 01:29:20.563933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.255 [2024-07-25 01:29:20.563951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.255 [2024-07-25 01:29:20.563958] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.255 [2024-07-25 01:29:20.563964] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.255 [2024-07-25 01:29:20.563980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.255 qpair failed and we were unable to recover it. 
00:28:58.255 [2024-07-25 01:29:20.573822] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.255 [2024-07-25 01:29:20.573964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.255 [2024-07-25 01:29:20.573982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.255 [2024-07-25 01:29:20.573989] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.255 [2024-07-25 01:29:20.573995] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.255 [2024-07-25 01:29:20.574011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.255 qpair failed and we were unable to recover it. 
00:28:58.255 [2024-07-25 01:29:20.583825] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.255 [2024-07-25 01:29:20.584009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.255 [2024-07-25 01:29:20.584029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.255 [2024-07-25 01:29:20.584036] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.255 [2024-07-25 01:29:20.584048] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.255 [2024-07-25 01:29:20.584065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.255 qpair failed and we were unable to recover it. 
00:28:58.255 [2024-07-25 01:29:20.593869] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.255 [2024-07-25 01:29:20.594007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.255 [2024-07-25 01:29:20.594024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.255 [2024-07-25 01:29:20.594031] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.255 [2024-07-25 01:29:20.594038] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.255 [2024-07-25 01:29:20.594059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.255 qpair failed and we were unable to recover it. 
00:28:58.255 [2024-07-25 01:29:20.603904] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.255 [2024-07-25 01:29:20.604048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.255 [2024-07-25 01:29:20.604065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.255 [2024-07-25 01:29:20.604073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.255 [2024-07-25 01:29:20.604079] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.255 [2024-07-25 01:29:20.604096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.255 qpair failed and we were unable to recover it. 
00:28:58.255 [2024-07-25 01:29:20.613900] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.255 [2024-07-25 01:29:20.614040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.255 [2024-07-25 01:29:20.614065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.255 [2024-07-25 01:29:20.614073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.255 [2024-07-25 01:29:20.614079] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.255 [2024-07-25 01:29:20.614095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.255 qpair failed and we were unable to recover it. 
00:28:58.255 [2024-07-25 01:29:20.623955] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.255 [2024-07-25 01:29:20.624103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.255 [2024-07-25 01:29:20.624121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.255 [2024-07-25 01:29:20.624128] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.255 [2024-07-25 01:29:20.624134] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.255 [2024-07-25 01:29:20.624151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.255 qpair failed and we were unable to recover it. 
00:28:58.255 [2024-07-25 01:29:20.633980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.255 [2024-07-25 01:29:20.634126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.255 [2024-07-25 01:29:20.634144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.255 [2024-07-25 01:29:20.634152] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.255 [2024-07-25 01:29:20.634157] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.255 [2024-07-25 01:29:20.634174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.255 qpair failed and we were unable to recover it. 
00:28:58.255 [2024-07-25 01:29:20.644012] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.255 [2024-07-25 01:29:20.644153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.255 [2024-07-25 01:29:20.644172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.255 [2024-07-25 01:29:20.644183] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.255 [2024-07-25 01:29:20.644189] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.255 [2024-07-25 01:29:20.644206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.255 qpair failed and we were unable to recover it. 
00:28:58.255 [2024-07-25 01:29:20.654041] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.255 [2024-07-25 01:29:20.654190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.255 [2024-07-25 01:29:20.654208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.255 [2024-07-25 01:29:20.654215] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.255 [2024-07-25 01:29:20.654221] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.255 [2024-07-25 01:29:20.654238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.255 qpair failed and we were unable to recover it. 
00:28:58.255 [2024-07-25 01:29:20.664002] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.255 [2024-07-25 01:29:20.664149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.255 [2024-07-25 01:29:20.664173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.255 [2024-07-25 01:29:20.664180] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.256 [2024-07-25 01:29:20.664186] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.256 [2024-07-25 01:29:20.664203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.256 qpair failed and we were unable to recover it. 
00:28:58.256 [2024-07-25 01:29:20.674311] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.256 [2024-07-25 01:29:20.674452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.256 [2024-07-25 01:29:20.674470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.256 [2024-07-25 01:29:20.674477] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.256 [2024-07-25 01:29:20.674482] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.256 [2024-07-25 01:29:20.674499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.256 qpair failed and we were unable to recover it. 
00:28:58.256 [2024-07-25 01:29:20.684071] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.256 [2024-07-25 01:29:20.684211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.256 [2024-07-25 01:29:20.684229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.256 [2024-07-25 01:29:20.684236] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.256 [2024-07-25 01:29:20.684241] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.256 [2024-07-25 01:29:20.684258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.256 qpair failed and we were unable to recover it. 
00:28:58.256 [2024-07-25 01:29:20.694168] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.256 [2024-07-25 01:29:20.694314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.256 [2024-07-25 01:29:20.694331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.256 [2024-07-25 01:29:20.694338] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.256 [2024-07-25 01:29:20.694344] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.256 [2024-07-25 01:29:20.694360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.256 qpair failed and we were unable to recover it. 
00:28:58.256 [2024-07-25 01:29:20.704120] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.256 [2024-07-25 01:29:20.704254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.256 [2024-07-25 01:29:20.704272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.256 [2024-07-25 01:29:20.704279] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.256 [2024-07-25 01:29:20.704285] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.256 [2024-07-25 01:29:20.704301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.256 qpair failed and we were unable to recover it. 
00:28:58.256 [2024-07-25 01:29:20.714178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.256 [2024-07-25 01:29:20.714319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.256 [2024-07-25 01:29:20.714337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.256 [2024-07-25 01:29:20.714344] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.256 [2024-07-25 01:29:20.714349] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.256 [2024-07-25 01:29:20.714366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.256 qpair failed and we were unable to recover it. 
00:28:58.256 [2024-07-25 01:29:20.724197] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.256 [2024-07-25 01:29:20.724336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.256 [2024-07-25 01:29:20.724354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.256 [2024-07-25 01:29:20.724361] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.256 [2024-07-25 01:29:20.724367] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.256 [2024-07-25 01:29:20.724384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.256 qpair failed and we were unable to recover it. 
00:28:58.256 [2024-07-25 01:29:20.734291] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.256 [2024-07-25 01:29:20.734428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.256 [2024-07-25 01:29:20.734449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.256 [2024-07-25 01:29:20.734456] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.256 [2024-07-25 01:29:20.734462] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.256 [2024-07-25 01:29:20.734479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.256 qpair failed and we were unable to recover it. 
00:28:58.256 [2024-07-25 01:29:20.744316] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.256 [2024-07-25 01:29:20.744480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.256 [2024-07-25 01:29:20.744498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.256 [2024-07-25 01:29:20.744505] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.256 [2024-07-25 01:29:20.744511] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.256 [2024-07-25 01:29:20.744527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.256 qpair failed and we were unable to recover it. 
00:28:58.517 [2024-07-25 01:29:20.754341] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.517 [2024-07-25 01:29:20.754476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.517 [2024-07-25 01:29:20.754494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.517 [2024-07-25 01:29:20.754501] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.518 [2024-07-25 01:29:20.754507] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.518 [2024-07-25 01:29:20.754524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.518 qpair failed and we were unable to recover it. 
00:28:58.518 [2024-07-25 01:29:20.764552] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.518 [2024-07-25 01:29:20.764734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.518 [2024-07-25 01:29:20.764753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.518 [2024-07-25 01:29:20.764760] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.518 [2024-07-25 01:29:20.764766] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.518 [2024-07-25 01:29:20.764783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.518 qpair failed and we were unable to recover it. 
00:28:58.518 [2024-07-25 01:29:20.774380] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.518 [2024-07-25 01:29:20.774522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.518 [2024-07-25 01:29:20.774539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.518 [2024-07-25 01:29:20.774546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.518 [2024-07-25 01:29:20.774552] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.518 [2024-07-25 01:29:20.774568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.518 qpair failed and we were unable to recover it. 
00:28:58.518 [2024-07-25 01:29:20.784352] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.518 [2024-07-25 01:29:20.784492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.518 [2024-07-25 01:29:20.784509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.518 [2024-07-25 01:29:20.784516] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.518 [2024-07-25 01:29:20.784522] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.518 [2024-07-25 01:29:20.784538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.518 qpair failed and we were unable to recover it. 
00:28:58.518 [2024-07-25 01:29:20.794384] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.518 [2024-07-25 01:29:20.794524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.518 [2024-07-25 01:29:20.794542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.518 [2024-07-25 01:29:20.794549] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.518 [2024-07-25 01:29:20.794554] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.518 [2024-07-25 01:29:20.794571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.518 qpair failed and we were unable to recover it. 
00:28:58.518 [2024-07-25 01:29:20.804471] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.518 [2024-07-25 01:29:20.804616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.518 [2024-07-25 01:29:20.804633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.518 [2024-07-25 01:29:20.804641] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.518 [2024-07-25 01:29:20.804646] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.518 [2024-07-25 01:29:20.804662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.518 qpair failed and we were unable to recover it. 
00:28:58.518 [2024-07-25 01:29:20.814449] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.518 [2024-07-25 01:29:20.814591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.518 [2024-07-25 01:29:20.814609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.518 [2024-07-25 01:29:20.814616] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.518 [2024-07-25 01:29:20.814621] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.518 [2024-07-25 01:29:20.814638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.518 qpair failed and we were unable to recover it. 
00:28:58.518 [2024-07-25 01:29:20.824585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.518 [2024-07-25 01:29:20.824755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.518 [2024-07-25 01:29:20.824783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.518 [2024-07-25 01:29:20.824790] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.518 [2024-07-25 01:29:20.824796] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.518 [2024-07-25 01:29:20.824812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.518 qpair failed and we were unable to recover it. 
00:28:58.518 [2024-07-25 01:29:20.834568] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.518 [2024-07-25 01:29:20.834709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.518 [2024-07-25 01:29:20.834725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.518 [2024-07-25 01:29:20.834732] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.518 [2024-07-25 01:29:20.834738] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.518 [2024-07-25 01:29:20.834754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.518 qpair failed and we were unable to recover it. 
00:28:58.518 [2024-07-25 01:29:20.844614] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.518 [2024-07-25 01:29:20.844752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.518 [2024-07-25 01:29:20.844769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.518 [2024-07-25 01:29:20.844776] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.518 [2024-07-25 01:29:20.844782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.518 [2024-07-25 01:29:20.844798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.518 qpair failed and we were unable to recover it. 
00:28:58.518 [2024-07-25 01:29:20.854646] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.518 [2024-07-25 01:29:20.854810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.518 [2024-07-25 01:29:20.854828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.518 [2024-07-25 01:29:20.854835] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.518 [2024-07-25 01:29:20.854841] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.518 [2024-07-25 01:29:20.854858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.518 qpair failed and we were unable to recover it. 
00:28:58.518 [2024-07-25 01:29:20.864651] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.518 [2024-07-25 01:29:20.864788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.518 [2024-07-25 01:29:20.864806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.518 [2024-07-25 01:29:20.864813] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.518 [2024-07-25 01:29:20.864819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.518 [2024-07-25 01:29:20.864839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.518 qpair failed and we were unable to recover it. 
00:28:58.518 [2024-07-25 01:29:20.874666] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.518 [2024-07-25 01:29:20.874803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.518 [2024-07-25 01:29:20.874821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.518 [2024-07-25 01:29:20.874827] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.518 [2024-07-25 01:29:20.874834] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.518 [2024-07-25 01:29:20.874850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.518 qpair failed and we were unable to recover it. 
00:28:58.518 [2024-07-25 01:29:20.884722] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.518 [2024-07-25 01:29:20.884864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.518 [2024-07-25 01:29:20.884883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.518 [2024-07-25 01:29:20.884890] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.518 [2024-07-25 01:29:20.884895] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.519 [2024-07-25 01:29:20.884912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.519 qpair failed and we were unable to recover it. 
00:28:58.519 [2024-07-25 01:29:20.894731] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.519 [2024-07-25 01:29:20.894869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.519 [2024-07-25 01:29:20.894885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.519 [2024-07-25 01:29:20.894893] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.519 [2024-07-25 01:29:20.894898] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.519 [2024-07-25 01:29:20.894915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.519 qpair failed and we were unable to recover it. 
00:28:58.519 [2024-07-25 01:29:20.904771] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.519 [2024-07-25 01:29:20.904911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.519 [2024-07-25 01:29:20.904928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.519 [2024-07-25 01:29:20.904935] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.519 [2024-07-25 01:29:20.904941] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.519 [2024-07-25 01:29:20.904956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.519 qpair failed and we were unable to recover it. 
00:28:58.519 [2024-07-25 01:29:20.914814] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.519 [2024-07-25 01:29:20.914950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.519 [2024-07-25 01:29:20.914970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.519 [2024-07-25 01:29:20.914977] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.519 [2024-07-25 01:29:20.914983] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.519 [2024-07-25 01:29:20.914999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.519 qpair failed and we were unable to recover it. 
00:28:58.519 [2024-07-25 01:29:20.924755] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.519 [2024-07-25 01:29:20.924894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.519 [2024-07-25 01:29:20.924912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.519 [2024-07-25 01:29:20.924919] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.519 [2024-07-25 01:29:20.924924] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.519 [2024-07-25 01:29:20.924941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.519 qpair failed and we were unable to recover it. 
00:28:58.519 [2024-07-25 01:29:20.934851] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.519 [2024-07-25 01:29:20.935012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.519 [2024-07-25 01:29:20.935029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.519 [2024-07-25 01:29:20.935037] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.519 [2024-07-25 01:29:20.935048] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.519 [2024-07-25 01:29:20.935065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.519 qpair failed and we were unable to recover it. 
00:28:58.519 [2024-07-25 01:29:20.944872] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.519 [2024-07-25 01:29:20.945018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.519 [2024-07-25 01:29:20.945035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.519 [2024-07-25 01:29:20.945048] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.519 [2024-07-25 01:29:20.945055] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.519 [2024-07-25 01:29:20.945071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.519 qpair failed and we were unable to recover it. 
00:28:58.519 [2024-07-25 01:29:20.954917] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.519 [2024-07-25 01:29:20.955059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.519 [2024-07-25 01:29:20.955077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.519 [2024-07-25 01:29:20.955084] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.519 [2024-07-25 01:29:20.955090] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.519 [2024-07-25 01:29:20.955110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.519 qpair failed and we were unable to recover it. 
00:28:58.519 [2024-07-25 01:29:20.964927] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.519 [2024-07-25 01:29:20.965078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.519 [2024-07-25 01:29:20.965095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.519 [2024-07-25 01:29:20.965102] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.519 [2024-07-25 01:29:20.965108] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.519 [2024-07-25 01:29:20.965124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.519 qpair failed and we were unable to recover it. 
00:28:58.519 [2024-07-25 01:29:20.974977] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.519 [2024-07-25 01:29:20.975120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.519 [2024-07-25 01:29:20.975137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.519 [2024-07-25 01:29:20.975144] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.519 [2024-07-25 01:29:20.975150] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.519 [2024-07-25 01:29:20.975166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.519 qpair failed and we were unable to recover it. 
00:28:58.519 [2024-07-25 01:29:20.984992] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.519 [2024-07-25 01:29:20.985139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.519 [2024-07-25 01:29:20.985157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.519 [2024-07-25 01:29:20.985163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.519 [2024-07-25 01:29:20.985169] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.519 [2024-07-25 01:29:20.985186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.519 qpair failed and we were unable to recover it. 
00:28:58.519 [2024-07-25 01:29:20.995050] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.519 [2024-07-25 01:29:20.995186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.519 [2024-07-25 01:29:20.995204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.519 [2024-07-25 01:29:20.995211] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.519 [2024-07-25 01:29:20.995216] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.519 [2024-07-25 01:29:20.995233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.519 qpair failed and we were unable to recover it. 
00:28:58.519 [2024-07-25 01:29:21.005029] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.519 [2024-07-25 01:29:21.005171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.519 [2024-07-25 01:29:21.005192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.519 [2024-07-25 01:29:21.005199] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.519 [2024-07-25 01:29:21.005205] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.519 [2024-07-25 01:29:21.005221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.519 qpair failed and we were unable to recover it. 
00:28:58.780 [2024-07-25 01:29:21.015120] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.780 [2024-07-25 01:29:21.015294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.780 [2024-07-25 01:29:21.015311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.780 [2024-07-25 01:29:21.015318] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.780 [2024-07-25 01:29:21.015324] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.780 [2024-07-25 01:29:21.015341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.780 qpair failed and we were unable to recover it. 
00:28:58.780 [2024-07-25 01:29:21.025091] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.780 [2024-07-25 01:29:21.025234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.780 [2024-07-25 01:29:21.025251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.780 [2024-07-25 01:29:21.025258] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.780 [2024-07-25 01:29:21.025264] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.780 [2024-07-25 01:29:21.025280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.780 qpair failed and we were unable to recover it. 
00:28:58.780 [2024-07-25 01:29:21.035134] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.780 [2024-07-25 01:29:21.035270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.780 [2024-07-25 01:29:21.035287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.780 [2024-07-25 01:29:21.035294] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.780 [2024-07-25 01:29:21.035300] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.780 [2024-07-25 01:29:21.035316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.780 qpair failed and we were unable to recover it. 
00:28:58.780 [2024-07-25 01:29:21.045162] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.780 [2024-07-25 01:29:21.045302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.780 [2024-07-25 01:29:21.045319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.780 [2024-07-25 01:29:21.045326] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.780 [2024-07-25 01:29:21.045342] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.780 [2024-07-25 01:29:21.045359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.780 qpair failed and we were unable to recover it. 
00:28:58.780 [2024-07-25 01:29:21.055236] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.780 [2024-07-25 01:29:21.055404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.780 [2024-07-25 01:29:21.055421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.780 [2024-07-25 01:29:21.055428] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.780 [2024-07-25 01:29:21.055434] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.780 [2024-07-25 01:29:21.055451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.780 qpair failed and we were unable to recover it. 
00:28:58.780 [2024-07-25 01:29:21.065149] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.780 [2024-07-25 01:29:21.065291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.780 [2024-07-25 01:29:21.065309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.780 [2024-07-25 01:29:21.065316] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.780 [2024-07-25 01:29:21.065321] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.781 [2024-07-25 01:29:21.065338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.781 qpair failed and we were unable to recover it. 
00:28:58.781 [2024-07-25 01:29:21.075219] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.781 [2024-07-25 01:29:21.075377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.781 [2024-07-25 01:29:21.075394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.781 [2024-07-25 01:29:21.075402] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.781 [2024-07-25 01:29:21.075407] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.781 [2024-07-25 01:29:21.075424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.781 qpair failed and we were unable to recover it. 
00:28:58.781 [2024-07-25 01:29:21.085276] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.781 [2024-07-25 01:29:21.085417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.781 [2024-07-25 01:29:21.085434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.781 [2024-07-25 01:29:21.085441] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.781 [2024-07-25 01:29:21.085447] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.781 [2024-07-25 01:29:21.085463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.781 qpair failed and we were unable to recover it. 
00:28:58.781 [2024-07-25 01:29:21.095238] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.781 [2024-07-25 01:29:21.095381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.781 [2024-07-25 01:29:21.095398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.781 [2024-07-25 01:29:21.095405] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.781 [2024-07-25 01:29:21.095411] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.781 [2024-07-25 01:29:21.095427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.781 qpair failed and we were unable to recover it. 
00:28:58.781 [2024-07-25 01:29:21.105340] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.781 [2024-07-25 01:29:21.105481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.781 [2024-07-25 01:29:21.105499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.781 [2024-07-25 01:29:21.105506] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.781 [2024-07-25 01:29:21.105512] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.781 [2024-07-25 01:29:21.105528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.781 qpair failed and we were unable to recover it. 
00:28:58.781 [2024-07-25 01:29:21.115361] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.781 [2024-07-25 01:29:21.115503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.781 [2024-07-25 01:29:21.115519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.781 [2024-07-25 01:29:21.115526] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.781 [2024-07-25 01:29:21.115532] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.781 [2024-07-25 01:29:21.115548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.781 qpair failed and we were unable to recover it. 
00:28:58.781 [2024-07-25 01:29:21.125380] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.781 [2024-07-25 01:29:21.125523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.781 [2024-07-25 01:29:21.125539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.781 [2024-07-25 01:29:21.125547] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.781 [2024-07-25 01:29:21.125553] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.781 [2024-07-25 01:29:21.125569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.781 qpair failed and we were unable to recover it. 
00:28:58.781 [2024-07-25 01:29:21.135424] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.781 [2024-07-25 01:29:21.135566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.781 [2024-07-25 01:29:21.135583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.781 [2024-07-25 01:29:21.135591] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.781 [2024-07-25 01:29:21.135600] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.781 [2024-07-25 01:29:21.135617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.781 qpair failed and we were unable to recover it. 
00:28:58.781 [2024-07-25 01:29:21.145430] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.781 [2024-07-25 01:29:21.145569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.781 [2024-07-25 01:29:21.145586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.781 [2024-07-25 01:29:21.145594] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.781 [2024-07-25 01:29:21.145599] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.781 [2024-07-25 01:29:21.145616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.781 qpair failed and we were unable to recover it. 
00:28:58.781 [2024-07-25 01:29:21.155488] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.781 [2024-07-25 01:29:21.155626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.781 [2024-07-25 01:29:21.155643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.781 [2024-07-25 01:29:21.155651] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.781 [2024-07-25 01:29:21.155656] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.781 [2024-07-25 01:29:21.155673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.781 qpair failed and we were unable to recover it. 
00:28:58.781 [2024-07-25 01:29:21.165505] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.781 [2024-07-25 01:29:21.165644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.781 [2024-07-25 01:29:21.165662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.781 [2024-07-25 01:29:21.165669] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.781 [2024-07-25 01:29:21.165675] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.781 [2024-07-25 01:29:21.165692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.781 qpair failed and we were unable to recover it. 
00:28:58.781 [2024-07-25 01:29:21.175518] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.781 [2024-07-25 01:29:21.175658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.781 [2024-07-25 01:29:21.175675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.781 [2024-07-25 01:29:21.175682] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.781 [2024-07-25 01:29:21.175688] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.781 [2024-07-25 01:29:21.175704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.781 qpair failed and we were unable to recover it. 
00:28:58.781 [2024-07-25 01:29:21.185484] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.781 [2024-07-25 01:29:21.185623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.781 [2024-07-25 01:29:21.185641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.781 [2024-07-25 01:29:21.185649] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.781 [2024-07-25 01:29:21.185655] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.781 [2024-07-25 01:29:21.185671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.781 qpair failed and we were unable to recover it. 
00:28:58.781 [2024-07-25 01:29:21.195512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.781 [2024-07-25 01:29:21.195656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.781 [2024-07-25 01:29:21.195674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.781 [2024-07-25 01:29:21.195680] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.781 [2024-07-25 01:29:21.195686] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.781 [2024-07-25 01:29:21.195702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.781 qpair failed and we were unable to recover it. 
00:28:58.781 [2024-07-25 01:29:21.205613] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.782 [2024-07-25 01:29:21.205752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.782 [2024-07-25 01:29:21.205770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.782 [2024-07-25 01:29:21.205777] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.782 [2024-07-25 01:29:21.205783] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.782 [2024-07-25 01:29:21.205799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.782 qpair failed and we were unable to recover it. 
00:28:58.782 [2024-07-25 01:29:21.215558] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.782 [2024-07-25 01:29:21.215698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.782 [2024-07-25 01:29:21.215715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.782 [2024-07-25 01:29:21.215722] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.782 [2024-07-25 01:29:21.215728] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.782 [2024-07-25 01:29:21.215744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.782 qpair failed and we were unable to recover it. 
00:28:58.782 [2024-07-25 01:29:21.225656] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.782 [2024-07-25 01:29:21.225788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.782 [2024-07-25 01:29:21.225806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.782 [2024-07-25 01:29:21.225813] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.782 [2024-07-25 01:29:21.225822] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.782 [2024-07-25 01:29:21.225838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.782 qpair failed and we were unable to recover it. 
00:28:58.782 [2024-07-25 01:29:21.235685] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.782 [2024-07-25 01:29:21.235823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.782 [2024-07-25 01:29:21.235841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.782 [2024-07-25 01:29:21.235848] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.782 [2024-07-25 01:29:21.235853] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.782 [2024-07-25 01:29:21.235870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.782 qpair failed and we were unable to recover it. 
00:28:58.782 [2024-07-25 01:29:21.245650] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.782 [2024-07-25 01:29:21.245824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.782 [2024-07-25 01:29:21.245850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.782 [2024-07-25 01:29:21.245857] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.782 [2024-07-25 01:29:21.245863] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.782 [2024-07-25 01:29:21.245880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.782 qpair failed and we were unable to recover it. 
00:28:58.782 [2024-07-25 01:29:21.255753] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.782 [2024-07-25 01:29:21.255896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.782 [2024-07-25 01:29:21.255913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.782 [2024-07-25 01:29:21.255920] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.782 [2024-07-25 01:29:21.255926] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.782 [2024-07-25 01:29:21.255942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.782 qpair failed and we were unable to recover it. 
00:28:58.782 [2024-07-25 01:29:21.265777] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.782 [2024-07-25 01:29:21.265912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.782 [2024-07-25 01:29:21.265930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.782 [2024-07-25 01:29:21.265936] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.782 [2024-07-25 01:29:21.265943] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:58.782 [2024-07-25 01:29:21.265958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.782 qpair failed and we were unable to recover it. 
00:28:59.043 [2024-07-25 01:29:21.275739] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.043 [2024-07-25 01:29:21.275909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.043 [2024-07-25 01:29:21.275926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.043 [2024-07-25 01:29:21.275933] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.043 [2024-07-25 01:29:21.275939] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:59.043 [2024-07-25 01:29:21.275955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:59.043 qpair failed and we were unable to recover it. 
00:28:59.043 [2024-07-25 01:29:21.285835] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.043 [2024-07-25 01:29:21.285975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.043 [2024-07-25 01:29:21.285991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.043 [2024-07-25 01:29:21.285999] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.043 [2024-07-25 01:29:21.286004] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:59.043 [2024-07-25 01:29:21.286020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:59.043 qpair failed and we were unable to recover it. 
00:28:59.043 [2024-07-25 01:29:21.295888] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.043 [2024-07-25 01:29:21.296026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.043 [2024-07-25 01:29:21.296048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.043 [2024-07-25 01:29:21.296055] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.043 [2024-07-25 01:29:21.296061] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:59.043 [2024-07-25 01:29:21.296078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:59.043 qpair failed and we were unable to recover it. 
00:28:59.043 [2024-07-25 01:29:21.305890] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.043 [2024-07-25 01:29:21.306028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.043 [2024-07-25 01:29:21.306050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.043 [2024-07-25 01:29:21.306057] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.043 [2024-07-25 01:29:21.306064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:59.043 [2024-07-25 01:29:21.306080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:59.043 qpair failed and we were unable to recover it. 
00:28:59.043 [2024-07-25 01:29:21.315916] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.043 [2024-07-25 01:29:21.316055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.043 [2024-07-25 01:29:21.316073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.043 [2024-07-25 01:29:21.316082] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.043 [2024-07-25 01:29:21.316088] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:59.043 [2024-07-25 01:29:21.316105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:59.043 qpair failed and we were unable to recover it. 
00:28:59.043 [2024-07-25 01:29:21.325962] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.043 [2024-07-25 01:29:21.326108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.043 [2024-07-25 01:29:21.326126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.043 [2024-07-25 01:29:21.326132] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.043 [2024-07-25 01:29:21.326138] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:59.043 [2024-07-25 01:29:21.326155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:59.043 qpair failed and we were unable to recover it. 
00:28:59.043 [2024-07-25 01:29:21.335970] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.043 [2024-07-25 01:29:21.336118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.043 [2024-07-25 01:29:21.336136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.043 [2024-07-25 01:29:21.336143] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.043 [2024-07-25 01:29:21.336149] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:59.043 [2024-07-25 01:29:21.336165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:59.043 qpair failed and we were unable to recover it. 
00:28:59.043 [2024-07-25 01:29:21.346045] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.043 [2024-07-25 01:29:21.346192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.043 [2024-07-25 01:29:21.346209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.043 [2024-07-25 01:29:21.346216] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.043 [2024-07-25 01:29:21.346222] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:59.043 [2024-07-25 01:29:21.346238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:59.043 qpair failed and we were unable to recover it. 
00:28:59.043 [2024-07-25 01:29:21.356023] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.043 [2024-07-25 01:29:21.356168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.043 [2024-07-25 01:29:21.356186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.043 [2024-07-25 01:29:21.356193] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.043 [2024-07-25 01:29:21.356199] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:59.043 [2024-07-25 01:29:21.356215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:59.043 qpair failed and we were unable to recover it. 
00:28:59.043 [2024-07-25 01:29:21.366073] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.044 [2024-07-25 01:29:21.366221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.044 [2024-07-25 01:29:21.366238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.044 [2024-07-25 01:29:21.366245] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.044 [2024-07-25 01:29:21.366251] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:59.044 [2024-07-25 01:29:21.366268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:59.044 qpair failed and we were unable to recover it. 
00:28:59.044 [2024-07-25 01:29:21.376110] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.044 [2024-07-25 01:29:21.376275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.044 [2024-07-25 01:29:21.376293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.044 [2024-07-25 01:29:21.376300] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.044 [2024-07-25 01:29:21.376306] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:59.044 [2024-07-25 01:29:21.376324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:59.044 qpair failed and we were unable to recover it. 
00:28:59.044 [2024-07-25 01:29:21.386117] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.044 [2024-07-25 01:29:21.386258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.044 [2024-07-25 01:29:21.386276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.044 [2024-07-25 01:29:21.386283] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.044 [2024-07-25 01:29:21.386289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:59.044 [2024-07-25 01:29:21.386305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:59.044 qpair failed and we were unable to recover it. 
00:28:59.044 [2024-07-25 01:29:21.396148] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.044 [2024-07-25 01:29:21.396297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.044 [2024-07-25 01:29:21.396314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.044 [2024-07-25 01:29:21.396321] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.044 [2024-07-25 01:29:21.396326] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:59.044 [2024-07-25 01:29:21.396343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:59.044 qpair failed and we were unable to recover it. 
00:28:59.044 [2024-07-25 01:29:21.406161] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.044 [2024-07-25 01:29:21.406301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.044 [2024-07-25 01:29:21.406318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.044 [2024-07-25 01:29:21.406328] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.044 [2024-07-25 01:29:21.406334] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:59.044 [2024-07-25 01:29:21.406350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:59.044 qpair failed and we were unable to recover it. 
00:28:59.044 [2024-07-25 01:29:21.416189] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.044 [2024-07-25 01:29:21.416334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.044 [2024-07-25 01:29:21.416352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.044 [2024-07-25 01:29:21.416359] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.044 [2024-07-25 01:29:21.416364] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:59.044 [2024-07-25 01:29:21.416381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:59.044 qpair failed and we were unable to recover it. 
00:28:59.044 [2024-07-25 01:29:21.426222] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.044 [2024-07-25 01:29:21.426355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.044 [2024-07-25 01:29:21.426372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.044 [2024-07-25 01:29:21.426380] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.044 [2024-07-25 01:29:21.426385] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:59.044 [2024-07-25 01:29:21.426401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:59.044 qpair failed and we were unable to recover it. 
00:28:59.044 [2024-07-25 01:29:21.436464] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.044 [2024-07-25 01:29:21.436601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.044 [2024-07-25 01:29:21.436620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.044 [2024-07-25 01:29:21.436626] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.044 [2024-07-25 01:29:21.436632] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:59.044 [2024-07-25 01:29:21.436649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:59.044 qpair failed and we were unable to recover it. 
00:28:59.044 [2024-07-25 01:29:21.446441] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.044 [2024-07-25 01:29:21.446579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.044 [2024-07-25 01:29:21.446596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.044 [2024-07-25 01:29:21.446603] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.044 [2024-07-25 01:29:21.446609] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:59.044 [2024-07-25 01:29:21.446626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:59.044 qpair failed and we were unable to recover it. 
00:28:59.044 [2024-07-25 01:29:21.456323] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.044 [2024-07-25 01:29:21.456462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.044 [2024-07-25 01:29:21.456480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.044 [2024-07-25 01:29:21.456487] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.044 [2024-07-25 01:29:21.456493] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:59.044 [2024-07-25 01:29:21.456510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:59.044 qpair failed and we were unable to recover it. 
00:28:59.044 [2024-07-25 01:29:21.466321] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.044 [2024-07-25 01:29:21.466458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.044 [2024-07-25 01:29:21.466476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.044 [2024-07-25 01:29:21.466482] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.044 [2024-07-25 01:29:21.466489] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:59.044 [2024-07-25 01:29:21.466504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:59.044 qpair failed and we were unable to recover it. 
00:28:59.044 [2024-07-25 01:29:21.476305] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.044 [2024-07-25 01:29:21.476440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.044 [2024-07-25 01:29:21.476457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.044 [2024-07-25 01:29:21.476464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.044 [2024-07-25 01:29:21.476470] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:59.044 [2024-07-25 01:29:21.476486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:59.044 qpair failed and we were unable to recover it. 
00:28:59.044 [2024-07-25 01:29:21.486438] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.044 [2024-07-25 01:29:21.486577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.044 [2024-07-25 01:29:21.486594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.044 [2024-07-25 01:29:21.486601] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.044 [2024-07-25 01:29:21.486607] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:28:59.044 [2024-07-25 01:29:21.486623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:59.044 qpair failed and we were unable to recover it.
00:28:59.044 [2024-07-25 01:29:21.496425] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.044 [2024-07-25 01:29:21.496568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.044 [2024-07-25 01:29:21.496588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.044 [2024-07-25 01:29:21.496595] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.044 [2024-07-25 01:29:21.496600] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:28:59.044 [2024-07-25 01:29:21.496616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:59.044 qpair failed and we were unable to recover it.
00:28:59.044 [2024-07-25 01:29:21.506465] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.044 [2024-07-25 01:29:21.506602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.044 [2024-07-25 01:29:21.506619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.044 [2024-07-25 01:29:21.506626] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.044 [2024-07-25 01:29:21.506632] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:28:59.044 [2024-07-25 01:29:21.506648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:59.044 qpair failed and we were unable to recover it.
00:28:59.044 [2024-07-25 01:29:21.516614] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.044 [2024-07-25 01:29:21.516753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.044 [2024-07-25 01:29:21.516770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.044 [2024-07-25 01:29:21.516777] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.044 [2024-07-25 01:29:21.516783] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:28:59.044 [2024-07-25 01:29:21.516798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:59.044 qpair failed and we were unable to recover it.
00:28:59.044 [2024-07-25 01:29:21.526516] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.044 [2024-07-25 01:29:21.526656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.044 [2024-07-25 01:29:21.526673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.044 [2024-07-25 01:29:21.526680] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.044 [2024-07-25 01:29:21.526685] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:28:59.044 [2024-07-25 01:29:21.526701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:59.044 qpair failed and we were unable to recover it.
00:28:59.305 [2024-07-25 01:29:21.536536] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.305 [2024-07-25 01:29:21.536673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.305 [2024-07-25 01:29:21.536691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.305 [2024-07-25 01:29:21.536698] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.305 [2024-07-25 01:29:21.536703] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:28:59.305 [2024-07-25 01:29:21.536720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:59.305 qpair failed and we were unable to recover it.
00:28:59.305 [2024-07-25 01:29:21.546496] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.305 [2024-07-25 01:29:21.546639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.305 [2024-07-25 01:29:21.546656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.305 [2024-07-25 01:29:21.546663] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.305 [2024-07-25 01:29:21.546669] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:28:59.305 [2024-07-25 01:29:21.546685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:59.305 qpair failed and we were unable to recover it.
00:28:59.305 [2024-07-25 01:29:21.556577] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.305 [2024-07-25 01:29:21.556713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.305 [2024-07-25 01:29:21.556732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.305 [2024-07-25 01:29:21.556739] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.305 [2024-07-25 01:29:21.556745] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:28:59.305 [2024-07-25 01:29:21.556762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:59.305 qpair failed and we were unable to recover it.
00:28:59.305 [2024-07-25 01:29:21.566603] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.305 [2024-07-25 01:29:21.566753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.305 [2024-07-25 01:29:21.566771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.305 [2024-07-25 01:29:21.566778] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.305 [2024-07-25 01:29:21.566783] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:28:59.305 [2024-07-25 01:29:21.566799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:59.305 qpair failed and we were unable to recover it.
00:28:59.305 [2024-07-25 01:29:21.576643] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.305 [2024-07-25 01:29:21.576785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.305 [2024-07-25 01:29:21.576802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.305 [2024-07-25 01:29:21.576809] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.305 [2024-07-25 01:29:21.576814] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:28:59.305 [2024-07-25 01:29:21.576831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:59.305 qpair failed and we were unable to recover it.
00:28:59.305 [2024-07-25 01:29:21.586670] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.305 [2024-07-25 01:29:21.586807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.305 [2024-07-25 01:29:21.586828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.305 [2024-07-25 01:29:21.586835] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.305 [2024-07-25 01:29:21.586841] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:28:59.305 [2024-07-25 01:29:21.586857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:59.305 qpair failed and we were unable to recover it.
00:28:59.305 [2024-07-25 01:29:21.596696] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.305 [2024-07-25 01:29:21.596859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.305 [2024-07-25 01:29:21.596876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.305 [2024-07-25 01:29:21.596883] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.305 [2024-07-25 01:29:21.596889] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:28:59.305 [2024-07-25 01:29:21.596905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:59.305 qpair failed and we were unable to recover it.
00:28:59.305 [2024-07-25 01:29:21.606739] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.305 [2024-07-25 01:29:21.606882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.305 [2024-07-25 01:29:21.606898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.305 [2024-07-25 01:29:21.606905] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.305 [2024-07-25 01:29:21.606911] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:28:59.305 [2024-07-25 01:29:21.606927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:59.305 qpair failed and we were unable to recover it.
00:28:59.305 [2024-07-25 01:29:21.616753] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.305 [2024-07-25 01:29:21.616896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.305 [2024-07-25 01:29:21.616912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.305 [2024-07-25 01:29:21.616919] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.305 [2024-07-25 01:29:21.616925] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:28:59.305 [2024-07-25 01:29:21.616942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:59.305 qpair failed and we were unable to recover it.
00:28:59.305 [2024-07-25 01:29:21.626770] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.305 [2024-07-25 01:29:21.626904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.305 [2024-07-25 01:29:21.626921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.305 [2024-07-25 01:29:21.626928] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.305 [2024-07-25 01:29:21.626934] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:28:59.305 [2024-07-25 01:29:21.626953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:59.305 qpair failed and we were unable to recover it.
00:28:59.305 [2024-07-25 01:29:21.636818] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.305 [2024-07-25 01:29:21.636953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.305 [2024-07-25 01:29:21.636972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.305 [2024-07-25 01:29:21.636980] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.305 [2024-07-25 01:29:21.636986] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:28:59.305 [2024-07-25 01:29:21.637002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:59.305 qpair failed and we were unable to recover it.
00:28:59.305 [2024-07-25 01:29:21.646875] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.305 [2024-07-25 01:29:21.647014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.305 [2024-07-25 01:29:21.647031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.305 [2024-07-25 01:29:21.647038] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.305 [2024-07-25 01:29:21.647050] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:28:59.305 [2024-07-25 01:29:21.647067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:59.305 qpair failed and we were unable to recover it.
00:28:59.305 [2024-07-25 01:29:21.656884] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.305 [2024-07-25 01:29:21.657024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.305 [2024-07-25 01:29:21.657041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.305 [2024-07-25 01:29:21.657056] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.305 [2024-07-25 01:29:21.657062] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:28:59.305 [2024-07-25 01:29:21.657078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:59.305 qpair failed and we were unable to recover it.
00:28:59.305 [2024-07-25 01:29:21.666926] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.305 [2024-07-25 01:29:21.667090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.305 [2024-07-25 01:29:21.667107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.305 [2024-07-25 01:29:21.667114] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.305 [2024-07-25 01:29:21.667120] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:28:59.305 [2024-07-25 01:29:21.667137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:59.305 qpair failed and we were unable to recover it.
00:28:59.305 [2024-07-25 01:29:21.676929] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.305 [2024-07-25 01:29:21.677074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.305 [2024-07-25 01:29:21.677095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.305 [2024-07-25 01:29:21.677102] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.305 [2024-07-25 01:29:21.677107] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:28:59.305 [2024-07-25 01:29:21.677124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:59.305 qpair failed and we were unable to recover it.
00:28:59.305 [2024-07-25 01:29:21.686977] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.305 [2024-07-25 01:29:21.687123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.305 [2024-07-25 01:29:21.687140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.305 [2024-07-25 01:29:21.687147] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.305 [2024-07-25 01:29:21.687153] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:28:59.305 [2024-07-25 01:29:21.687169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:59.305 qpair failed and we were unable to recover it.
00:28:59.305 [2024-07-25 01:29:21.696986] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.305 [2024-07-25 01:29:21.697130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.305 [2024-07-25 01:29:21.697147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.305 [2024-07-25 01:29:21.697154] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.305 [2024-07-25 01:29:21.697160] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:28:59.306 [2024-07-25 01:29:21.697177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:59.306 qpair failed and we were unable to recover it.
00:28:59.306 [2024-07-25 01:29:21.706935] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.306 [2024-07-25 01:29:21.707082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.306 [2024-07-25 01:29:21.707099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.306 [2024-07-25 01:29:21.707106] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.306 [2024-07-25 01:29:21.707112] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:28:59.306 [2024-07-25 01:29:21.707128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:59.306 qpair failed and we were unable to recover it.
00:28:59.306 [2024-07-25 01:29:21.717054] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.306 [2024-07-25 01:29:21.717196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.306 [2024-07-25 01:29:21.717214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.306 [2024-07-25 01:29:21.717221] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.306 [2024-07-25 01:29:21.717226] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:28:59.306 [2024-07-25 01:29:21.717245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:59.306 qpair failed and we were unable to recover it.
00:28:59.306 [2024-07-25 01:29:21.727118] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.306 [2024-07-25 01:29:21.727259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.306 [2024-07-25 01:29:21.727276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.306 [2024-07-25 01:29:21.727283] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.306 [2024-07-25 01:29:21.727289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:28:59.306 [2024-07-25 01:29:21.727306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:59.306 qpair failed and we were unable to recover it.
00:28:59.306 [2024-07-25 01:29:21.737130] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.306 [2024-07-25 01:29:21.737268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.306 [2024-07-25 01:29:21.737285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.306 [2024-07-25 01:29:21.737292] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.306 [2024-07-25 01:29:21.737298] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:28:59.306 [2024-07-25 01:29:21.737314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:59.306 qpair failed and we were unable to recover it.
00:28:59.306 [2024-07-25 01:29:21.747123] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.306 [2024-07-25 01:29:21.747276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.306 [2024-07-25 01:29:21.747294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.306 [2024-07-25 01:29:21.747301] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.306 [2024-07-25 01:29:21.747306] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:28:59.306 [2024-07-25 01:29:21.747323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:59.306 qpair failed and we were unable to recover it.
00:28:59.306 [2024-07-25 01:29:21.757157] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.306 [2024-07-25 01:29:21.757295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.306 [2024-07-25 01:29:21.757312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.306 [2024-07-25 01:29:21.757319] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.306 [2024-07-25 01:29:21.757324] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:28:59.306 [2024-07-25 01:29:21.757341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:59.306 qpair failed and we were unable to recover it.
00:28:59.306 [2024-07-25 01:29:21.767188] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.306 [2024-07-25 01:29:21.767327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.306 [2024-07-25 01:29:21.767347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.306 [2024-07-25 01:29:21.767354] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.306 [2024-07-25 01:29:21.767360] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:28:59.306 [2024-07-25 01:29:21.767377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:59.306 qpair failed and we were unable to recover it.
00:28:59.306 [2024-07-25 01:29:21.777229] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.306 [2024-07-25 01:29:21.777371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.306 [2024-07-25 01:29:21.777389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.306 [2024-07-25 01:29:21.777395] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.306 [2024-07-25 01:29:21.777401] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:28:59.306 [2024-07-25 01:29:21.777417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:59.306 qpair failed and we were unable to recover it.
00:28:59.306 [2024-07-25 01:29:21.787261] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.306 [2024-07-25 01:29:21.787425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.306 [2024-07-25 01:29:21.787442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.306 [2024-07-25 01:29:21.787449] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.306 [2024-07-25 01:29:21.787454] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:28:59.306 [2024-07-25 01:29:21.787471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:59.306 qpair failed and we were unable to recover it.
00:28:59.566 [2024-07-25 01:29:21.797265] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.566 [2024-07-25 01:29:21.797407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.566 [2024-07-25 01:29:21.797424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.566 [2024-07-25 01:29:21.797431] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.566 [2024-07-25 01:29:21.797437] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:28:59.566 [2024-07-25 01:29:21.797454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:59.566 qpair failed and we were unable to recover it.
00:28:59.566 [2024-07-25 01:29:21.807304] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.566 [2024-07-25 01:29:21.807442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.566 [2024-07-25 01:29:21.807459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.566 [2024-07-25 01:29:21.807466] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.566 [2024-07-25 01:29:21.807478] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:28:59.566 [2024-07-25 01:29:21.807495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:59.566 qpair failed and we were unable to recover it.
00:28:59.566 [2024-07-25 01:29:21.817329] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.566 [2024-07-25 01:29:21.817475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.566 [2024-07-25 01:29:21.817492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.566 [2024-07-25 01:29:21.817499] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.566 [2024-07-25 01:29:21.817505] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:28:59.566 [2024-07-25 01:29:21.817521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:59.566 qpair failed and we were unable to recover it.
00:28:59.566 [2024-07-25 01:29:21.827357] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.566 [2024-07-25 01:29:21.827494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.566 [2024-07-25 01:29:21.827511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.566 [2024-07-25 01:29:21.827518] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.566 [2024-07-25 01:29:21.827523] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:28:59.566 [2024-07-25 01:29:21.827539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:59.566 qpair failed and we were unable to recover it.
00:28:59.566 [2024-07-25 01:29:21.837377] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.566 [2024-07-25 01:29:21.837536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.566 [2024-07-25 01:29:21.837553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.566 [2024-07-25 01:29:21.837559] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.566 [2024-07-25 01:29:21.837565] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:28:59.566 [2024-07-25 01:29:21.837582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:59.566 qpair failed and we were unable to recover it.
00:28:59.566 [2024-07-25 01:29:21.847415] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.566 [2024-07-25 01:29:21.847552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.566 [2024-07-25 01:29:21.847569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.566 [2024-07-25 01:29:21.847576] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.566 [2024-07-25 01:29:21.847582] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:59.566 [2024-07-25 01:29:21.847598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:59.566 qpair failed and we were unable to recover it. 
00:28:59.566 [2024-07-25 01:29:21.857429] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.566 [2024-07-25 01:29:21.857574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.566 [2024-07-25 01:29:21.857592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.566 [2024-07-25 01:29:21.857600] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.566 [2024-07-25 01:29:21.857607] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:28:59.567 [2024-07-25 01:29:21.857625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:59.567 qpair failed and we were unable to recover it.
00:28:59.567 [2024-07-25 01:29:21.867472] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.567 [2024-07-25 01:29:21.867630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.567 [2024-07-25 01:29:21.867647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.567 [2024-07-25 01:29:21.867656] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.567 [2024-07-25 01:29:21.867663] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:28:59.567 [2024-07-25 01:29:21.867680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:59.567 qpair failed and we were unable to recover it.
00:28:59.567 [2024-07-25 01:29:21.877495] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.567 [2024-07-25 01:29:21.877638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.567 [2024-07-25 01:29:21.877656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.567 [2024-07-25 01:29:21.877665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.567 [2024-07-25 01:29:21.877672] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:28:59.567 [2024-07-25 01:29:21.877689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:59.567 qpair failed and we were unable to recover it.
00:28:59.567 [2024-07-25 01:29:21.887535] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.567 [2024-07-25 01:29:21.887674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.567 [2024-07-25 01:29:21.887691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.567 [2024-07-25 01:29:21.887699] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.567 [2024-07-25 01:29:21.887704] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:28:59.567 [2024-07-25 01:29:21.887720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:59.567 qpair failed and we were unable to recover it.
00:28:59.567 [2024-07-25 01:29:21.897538] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.567 [2024-07-25 01:29:21.897679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.567 [2024-07-25 01:29:21.897697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.567 [2024-07-25 01:29:21.897703] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.567 [2024-07-25 01:29:21.897712] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:28:59.567 [2024-07-25 01:29:21.897729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:59.567 qpair failed and we were unable to recover it.
00:28:59.567 [2024-07-25 01:29:21.907559] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.567 [2024-07-25 01:29:21.907693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.567 [2024-07-25 01:29:21.907710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.567 [2024-07-25 01:29:21.907717] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.567 [2024-07-25 01:29:21.907723] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:28:59.567 [2024-07-25 01:29:21.907740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:59.567 qpair failed and we were unable to recover it.
00:28:59.567 [2024-07-25 01:29:21.917592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.567 [2024-07-25 01:29:21.917733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.567 [2024-07-25 01:29:21.917752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.567 [2024-07-25 01:29:21.917759] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.567 [2024-07-25 01:29:21.917765] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:28:59.567 [2024-07-25 01:29:21.917781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:59.567 qpair failed and we were unable to recover it.
00:28:59.567 [2024-07-25 01:29:21.927657] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.567 [2024-07-25 01:29:21.927800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.567 [2024-07-25 01:29:21.927817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.567 [2024-07-25 01:29:21.927824] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.567 [2024-07-25 01:29:21.927830] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:28:59.567 [2024-07-25 01:29:21.927846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:59.567 qpair failed and we were unable to recover it.
00:28:59.567 [2024-07-25 01:29:21.937612] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.567 [2024-07-25 01:29:21.937750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.567 [2024-07-25 01:29:21.937767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.567 [2024-07-25 01:29:21.937774] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.567 [2024-07-25 01:29:21.937780] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:28:59.567 [2024-07-25 01:29:21.937796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:59.567 qpair failed and we were unable to recover it.
00:28:59.567 [2024-07-25 01:29:21.947679] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.567 [2024-07-25 01:29:21.947830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.567 [2024-07-25 01:29:21.947849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.567 [2024-07-25 01:29:21.947856] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.567 [2024-07-25 01:29:21.947861] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:28:59.567 [2024-07-25 01:29:21.947878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:59.567 qpair failed and we were unable to recover it.
00:28:59.567 [2024-07-25 01:29:21.957674] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.567 [2024-07-25 01:29:21.957824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.567 [2024-07-25 01:29:21.957841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.567 [2024-07-25 01:29:21.957848] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.567 [2024-07-25 01:29:21.957853] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:28:59.567 [2024-07-25 01:29:21.957870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:59.567 qpair failed and we were unable to recover it.
00:28:59.567 [2024-07-25 01:29:21.967746] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.567 [2024-07-25 01:29:21.967889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.567 [2024-07-25 01:29:21.967907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.567 [2024-07-25 01:29:21.967914] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.567 [2024-07-25 01:29:21.967920] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:28:59.567 [2024-07-25 01:29:21.967936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:59.567 qpair failed and we were unable to recover it.
00:28:59.567 [2024-07-25 01:29:21.977714] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.567 [2024-07-25 01:29:21.977853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.567 [2024-07-25 01:29:21.977870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.567 [2024-07-25 01:29:21.977878] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.567 [2024-07-25 01:29:21.977883] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:28:59.567 [2024-07-25 01:29:21.977900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:59.567 qpair failed and we were unable to recover it.
00:28:59.567 [2024-07-25 01:29:21.987774] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.567 [2024-07-25 01:29:21.987912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.567 [2024-07-25 01:29:21.987930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.567 [2024-07-25 01:29:21.987937] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.567 [2024-07-25 01:29:21.987947] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:28:59.567 [2024-07-25 01:29:21.987963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:59.567 qpair failed and we were unable to recover it.
00:28:59.568 [2024-07-25 01:29:21.997763] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.568 [2024-07-25 01:29:21.997899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.568 [2024-07-25 01:29:21.997916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.568 [2024-07-25 01:29:21.997923] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.568 [2024-07-25 01:29:21.997929] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:28:59.568 [2024-07-25 01:29:21.997945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:59.568 qpair failed and we were unable to recover it.
00:28:59.568 [2024-07-25 01:29:22.007899] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.568 [2024-07-25 01:29:22.008051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.568 [2024-07-25 01:29:22.008069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.568 [2024-07-25 01:29:22.008076] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.568 [2024-07-25 01:29:22.008082] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:28:59.568 [2024-07-25 01:29:22.008098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:59.568 qpair failed and we were unable to recover it.
00:28:59.568 [2024-07-25 01:29:22.017828] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.568 [2024-07-25 01:29:22.017969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.568 [2024-07-25 01:29:22.017986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.568 [2024-07-25 01:29:22.017993] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.568 [2024-07-25 01:29:22.017998] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:28:59.568 [2024-07-25 01:29:22.018015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:59.568 qpair failed and we were unable to recover it.
00:28:59.568 [2024-07-25 01:29:22.027855] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.568 [2024-07-25 01:29:22.027990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.568 [2024-07-25 01:29:22.028008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.568 [2024-07-25 01:29:22.028015] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.568 [2024-07-25 01:29:22.028020] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:28:59.568 [2024-07-25 01:29:22.028036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:59.568 qpair failed and we were unable to recover it.
00:28:59.568 [2024-07-25 01:29:22.037936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.568 [2024-07-25 01:29:22.038095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.568 [2024-07-25 01:29:22.038112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.568 [2024-07-25 01:29:22.038119] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.568 [2024-07-25 01:29:22.038125] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:28:59.568 [2024-07-25 01:29:22.038141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:59.568 qpair failed and we were unable to recover it.
00:28:59.568 [2024-07-25 01:29:22.047973] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.568 [2024-07-25 01:29:22.048122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.568 [2024-07-25 01:29:22.048139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.568 [2024-07-25 01:29:22.048146] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.568 [2024-07-25 01:29:22.048152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:28:59.568 [2024-07-25 01:29:22.048169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:59.568 qpair failed and we were unable to recover it.
00:28:59.827 [2024-07-25 01:29:22.058032] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.828 [2024-07-25 01:29:22.058184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.828 [2024-07-25 01:29:22.058202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.828 [2024-07-25 01:29:22.058209] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.828 [2024-07-25 01:29:22.058215] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:28:59.828 [2024-07-25 01:29:22.058231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:59.828 qpair failed and we were unable to recover it.
00:28:59.828 [2024-07-25 01:29:22.067969] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.828 [2024-07-25 01:29:22.068119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.828 [2024-07-25 01:29:22.068137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.828 [2024-07-25 01:29:22.068144] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.828 [2024-07-25 01:29:22.068150] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:28:59.828 [2024-07-25 01:29:22.068166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:59.828 qpair failed and we were unable to recover it.
00:28:59.828 [2024-07-25 01:29:22.077999] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.828 [2024-07-25 01:29:22.078149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.828 [2024-07-25 01:29:22.078167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.828 [2024-07-25 01:29:22.078178] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.828 [2024-07-25 01:29:22.078184] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:28:59.828 [2024-07-25 01:29:22.078200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:59.828 qpair failed and we were unable to recover it.
00:28:59.828 [2024-07-25 01:29:22.088017] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.828 [2024-07-25 01:29:22.088352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.828 [2024-07-25 01:29:22.088371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.828 [2024-07-25 01:29:22.088378] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.828 [2024-07-25 01:29:22.088384] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:28:59.828 [2024-07-25 01:29:22.088399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:59.828 qpair failed and we were unable to recover it.
00:28:59.828 [2024-07-25 01:29:22.098051] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.828 [2024-07-25 01:29:22.098198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.828 [2024-07-25 01:29:22.098215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.828 [2024-07-25 01:29:22.098222] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.828 [2024-07-25 01:29:22.098227] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:28:59.828 [2024-07-25 01:29:22.098244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:59.828 qpair failed and we were unable to recover it.
00:28:59.828 [2024-07-25 01:29:22.108135] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.828 [2024-07-25 01:29:22.108274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.828 [2024-07-25 01:29:22.108291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.828 [2024-07-25 01:29:22.108298] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.828 [2024-07-25 01:29:22.108304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:28:59.828 [2024-07-25 01:29:22.108320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:59.828 qpair failed and we were unable to recover it.
00:28:59.828 [2024-07-25 01:29:22.118153] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.828 [2024-07-25 01:29:22.118294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.828 [2024-07-25 01:29:22.118311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.828 [2024-07-25 01:29:22.118318] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.828 [2024-07-25 01:29:22.118324] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:28:59.828 [2024-07-25 01:29:22.118340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:59.828 qpair failed and we were unable to recover it.
00:28:59.828 [2024-07-25 01:29:22.128150] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.828 [2024-07-25 01:29:22.128297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.828 [2024-07-25 01:29:22.128314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.828 [2024-07-25 01:29:22.128321] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.828 [2024-07-25 01:29:22.128327] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:28:59.828 [2024-07-25 01:29:22.128344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:59.828 qpair failed and we were unable to recover it.
00:28:59.828 [2024-07-25 01:29:22.138196] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.828 [2024-07-25 01:29:22.138381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.828 [2024-07-25 01:29:22.138399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.828 [2024-07-25 01:29:22.138406] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.828 [2024-07-25 01:29:22.138411] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:28:59.828 [2024-07-25 01:29:22.138428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:59.828 qpair failed and we were unable to recover it.
00:28:59.828 [2024-07-25 01:29:22.148199] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.828 [2024-07-25 01:29:22.148351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.828 [2024-07-25 01:29:22.148370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.828 [2024-07-25 01:29:22.148377] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.828 [2024-07-25 01:29:22.148382] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:28:59.828 [2024-07-25 01:29:22.148400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:59.828 qpair failed and we were unable to recover it.
00:28:59.828 [2024-07-25 01:29:22.158223] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.828 [2024-07-25 01:29:22.158358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.828 [2024-07-25 01:29:22.158376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.828 [2024-07-25 01:29:22.158383] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.828 [2024-07-25 01:29:22.158389] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:28:59.828 [2024-07-25 01:29:22.158405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:59.828 qpair failed and we were unable to recover it.
00:28:59.828 [2024-07-25 01:29:22.168303] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.828 [2024-07-25 01:29:22.168480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.828 [2024-07-25 01:29:22.168504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.828 [2024-07-25 01:29:22.168514] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.828 [2024-07-25 01:29:22.168520] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:59.828 [2024-07-25 01:29:22.168536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:59.828 qpair failed and we were unable to recover it. 
00:28:59.828 [2024-07-25 01:29:22.178296] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.828 [2024-07-25 01:29:22.178434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.828 [2024-07-25 01:29:22.178452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.828 [2024-07-25 01:29:22.178459] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.828 [2024-07-25 01:29:22.178464] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:59.828 [2024-07-25 01:29:22.178481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:59.828 qpair failed and we were unable to recover it. 
00:28:59.829 [2024-07-25 01:29:22.188390] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.829 [2024-07-25 01:29:22.188526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.829 [2024-07-25 01:29:22.188544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.829 [2024-07-25 01:29:22.188551] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.829 [2024-07-25 01:29:22.188558] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:59.829 [2024-07-25 01:29:22.188574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:59.829 qpair failed and we were unable to recover it. 
00:28:59.829 [2024-07-25 01:29:22.198335] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.829 [2024-07-25 01:29:22.198492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.829 [2024-07-25 01:29:22.198509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.829 [2024-07-25 01:29:22.198516] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.829 [2024-07-25 01:29:22.198522] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:59.829 [2024-07-25 01:29:22.198538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:59.829 qpair failed and we were unable to recover it. 
00:28:59.829 [2024-07-25 01:29:22.208425] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.829 [2024-07-25 01:29:22.208576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.829 [2024-07-25 01:29:22.208594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.829 [2024-07-25 01:29:22.208600] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.829 [2024-07-25 01:29:22.208606] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:59.829 [2024-07-25 01:29:22.208622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:59.829 qpair failed and we were unable to recover it. 
00:28:59.829 [2024-07-25 01:29:22.218390] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.829 [2024-07-25 01:29:22.218528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.829 [2024-07-25 01:29:22.218545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.829 [2024-07-25 01:29:22.218552] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.829 [2024-07-25 01:29:22.218558] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:59.829 [2024-07-25 01:29:22.218574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:59.829 qpair failed and we were unable to recover it. 
00:28:59.829 [2024-07-25 01:29:22.228424] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.829 [2024-07-25 01:29:22.228569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.829 [2024-07-25 01:29:22.228585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.829 [2024-07-25 01:29:22.228593] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.829 [2024-07-25 01:29:22.228598] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:59.829 [2024-07-25 01:29:22.228614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:59.829 qpair failed and we were unable to recover it. 
00:28:59.829 [2024-07-25 01:29:22.238443] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.829 [2024-07-25 01:29:22.238702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.829 [2024-07-25 01:29:22.238720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.829 [2024-07-25 01:29:22.238726] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.829 [2024-07-25 01:29:22.238733] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:59.829 [2024-07-25 01:29:22.238748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:59.829 qpair failed and we were unable to recover it. 
00:28:59.829 [2024-07-25 01:29:22.248477] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.829 [2024-07-25 01:29:22.248623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.829 [2024-07-25 01:29:22.248640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.829 [2024-07-25 01:29:22.248647] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.829 [2024-07-25 01:29:22.248653] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:59.829 [2024-07-25 01:29:22.248669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:59.829 qpair failed and we were unable to recover it. 
00:28:59.829 [2024-07-25 01:29:22.258549] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.829 [2024-07-25 01:29:22.258688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.829 [2024-07-25 01:29:22.258705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.829 [2024-07-25 01:29:22.258715] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.829 [2024-07-25 01:29:22.258721] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:59.829 [2024-07-25 01:29:22.258737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:59.829 qpair failed and we were unable to recover it. 
00:28:59.829 [2024-07-25 01:29:22.268604] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.829 [2024-07-25 01:29:22.268746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.829 [2024-07-25 01:29:22.268763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.829 [2024-07-25 01:29:22.268770] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.829 [2024-07-25 01:29:22.268776] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:59.829 [2024-07-25 01:29:22.268792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:59.829 qpair failed and we were unable to recover it. 
00:28:59.829 [2024-07-25 01:29:22.278564] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.829 [2024-07-25 01:29:22.278748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.829 [2024-07-25 01:29:22.278767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.829 [2024-07-25 01:29:22.278774] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.829 [2024-07-25 01:29:22.278780] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:59.829 [2024-07-25 01:29:22.278796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:59.829 qpair failed and we were unable to recover it. 
00:28:59.829 [2024-07-25 01:29:22.288668] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.829 [2024-07-25 01:29:22.288808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.829 [2024-07-25 01:29:22.288825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.829 [2024-07-25 01:29:22.288832] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.829 [2024-07-25 01:29:22.288838] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:59.829 [2024-07-25 01:29:22.288854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:59.829 qpair failed and we were unable to recover it. 
00:28:59.829 [2024-07-25 01:29:22.298628] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.829 [2024-07-25 01:29:22.298764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.829 [2024-07-25 01:29:22.298781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.829 [2024-07-25 01:29:22.298788] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.829 [2024-07-25 01:29:22.298793] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:59.829 [2024-07-25 01:29:22.298809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:59.829 qpair failed and we were unable to recover it. 
00:28:59.829 [2024-07-25 01:29:22.308773] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.829 [2024-07-25 01:29:22.308938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.829 [2024-07-25 01:29:22.308955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.829 [2024-07-25 01:29:22.308962] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.829 [2024-07-25 01:29:22.308968] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:28:59.829 [2024-07-25 01:29:22.308984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:59.829 qpair failed and we were unable to recover it. 
00:28:59.829 [2024-07-25 01:29:22.318737] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.090 [2024-07-25 01:29:22.318876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.090 [2024-07-25 01:29:22.318895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.090 [2024-07-25 01:29:22.318903] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.090 [2024-07-25 01:29:22.318908] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:29:00.090 [2024-07-25 01:29:22.318925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:00.090 qpair failed and we were unable to recover it. 
00:29:00.090 [2024-07-25 01:29:22.328792] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.090 [2024-07-25 01:29:22.328933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.090 [2024-07-25 01:29:22.328949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.090 [2024-07-25 01:29:22.328956] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.090 [2024-07-25 01:29:22.328962] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:29:00.090 [2024-07-25 01:29:22.328978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:00.090 qpair failed and we were unable to recover it. 
00:29:00.090 [2024-07-25 01:29:22.338885] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.090 [2024-07-25 01:29:22.339053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.090 [2024-07-25 01:29:22.339070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.090 [2024-07-25 01:29:22.339077] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.090 [2024-07-25 01:29:22.339082] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:29:00.090 [2024-07-25 01:29:22.339099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:00.090 qpair failed and we were unable to recover it. 
00:29:00.090 [2024-07-25 01:29:22.348886] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.090 [2024-07-25 01:29:22.349025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.090 [2024-07-25 01:29:22.349051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.090 [2024-07-25 01:29:22.349059] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.090 [2024-07-25 01:29:22.349064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:29:00.090 [2024-07-25 01:29:22.349080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:00.090 qpair failed and we were unable to recover it. 
00:29:00.090 [2024-07-25 01:29:22.358908] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.090 [2024-07-25 01:29:22.359053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.090 [2024-07-25 01:29:22.359071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.090 [2024-07-25 01:29:22.359078] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.090 [2024-07-25 01:29:22.359083] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:29:00.090 [2024-07-25 01:29:22.359100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:00.090 qpair failed and we were unable to recover it. 
00:29:00.090 [2024-07-25 01:29:22.368914] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.090 [2024-07-25 01:29:22.369064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.090 [2024-07-25 01:29:22.369081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.090 [2024-07-25 01:29:22.369088] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.090 [2024-07-25 01:29:22.369094] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:29:00.090 [2024-07-25 01:29:22.369110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:00.090 qpair failed and we were unable to recover it. 
00:29:00.090 [2024-07-25 01:29:22.378898] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.090 [2024-07-25 01:29:22.379039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.090 [2024-07-25 01:29:22.379062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.090 [2024-07-25 01:29:22.379069] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.090 [2024-07-25 01:29:22.379074] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:29:00.090 [2024-07-25 01:29:22.379091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:00.090 qpair failed and we were unable to recover it. 
00:29:00.090 [2024-07-25 01:29:22.388920] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.090 [2024-07-25 01:29:22.389107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.090 [2024-07-25 01:29:22.389125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.090 [2024-07-25 01:29:22.389132] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.090 [2024-07-25 01:29:22.389138] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:29:00.090 [2024-07-25 01:29:22.389158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:00.090 qpair failed and we were unable to recover it. 
00:29:00.090 [2024-07-25 01:29:22.399025] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.090 [2024-07-25 01:29:22.399185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.090 [2024-07-25 01:29:22.399202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.090 [2024-07-25 01:29:22.399209] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.090 [2024-07-25 01:29:22.399215] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:29:00.090 [2024-07-25 01:29:22.399232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:00.090 qpair failed and we were unable to recover it. 
00:29:00.090 [2024-07-25 01:29:22.409012] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.090 [2024-07-25 01:29:22.409155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.090 [2024-07-25 01:29:22.409172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.090 [2024-07-25 01:29:22.409179] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.090 [2024-07-25 01:29:22.409184] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:29:00.090 [2024-07-25 01:29:22.409201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:00.090 qpair failed and we were unable to recover it. 
00:29:00.090 [2024-07-25 01:29:22.419086] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.090 [2024-07-25 01:29:22.419238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.090 [2024-07-25 01:29:22.419255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.090 [2024-07-25 01:29:22.419262] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.090 [2024-07-25 01:29:22.419267] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:29:00.090 [2024-07-25 01:29:22.419283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:00.090 qpair failed and we were unable to recover it. 
00:29:00.090 [2024-07-25 01:29:22.429075] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.091 [2024-07-25 01:29:22.429209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.091 [2024-07-25 01:29:22.429226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.091 [2024-07-25 01:29:22.429233] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.091 [2024-07-25 01:29:22.429238] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:29:00.091 [2024-07-25 01:29:22.429255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:00.091 qpair failed and we were unable to recover it. 
00:29:00.091 [2024-07-25 01:29:22.439022] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.091 [2024-07-25 01:29:22.439173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.091 [2024-07-25 01:29:22.439194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.091 [2024-07-25 01:29:22.439201] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.091 [2024-07-25 01:29:22.439207] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:29:00.091 [2024-07-25 01:29:22.439224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:00.091 qpair failed and we were unable to recover it. 
00:29:00.091 [2024-07-25 01:29:22.449117] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.091 [2024-07-25 01:29:22.449260] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.091 [2024-07-25 01:29:22.449278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.091 [2024-07-25 01:29:22.449285] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.091 [2024-07-25 01:29:22.449291] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:29:00.091 [2024-07-25 01:29:22.449307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:00.091 qpair failed and we were unable to recover it.
00:29:00.091 [2024-07-25 01:29:22.459181] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.091 [2024-07-25 01:29:22.459335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.091 [2024-07-25 01:29:22.459352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.091 [2024-07-25 01:29:22.459359] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.091 [2024-07-25 01:29:22.459365] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:29:00.091 [2024-07-25 01:29:22.459382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:00.091 qpair failed and we were unable to recover it.
00:29:00.091 [2024-07-25 01:29:22.469214] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.091 [2024-07-25 01:29:22.469351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.091 [2024-07-25 01:29:22.469369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.091 [2024-07-25 01:29:22.469376] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.091 [2024-07-25 01:29:22.469382] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:29:00.091 [2024-07-25 01:29:22.469398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:00.091 qpair failed and we were unable to recover it.
00:29:00.091 [2024-07-25 01:29:22.479216] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.091 [2024-07-25 01:29:22.479355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.091 [2024-07-25 01:29:22.479371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.091 [2024-07-25 01:29:22.479378] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.091 [2024-07-25 01:29:22.479384] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:29:00.091 [2024-07-25 01:29:22.479403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:00.091 qpair failed and we were unable to recover it.
00:29:00.091 [2024-07-25 01:29:22.489252] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.091 [2024-07-25 01:29:22.489396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.091 [2024-07-25 01:29:22.489414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.091 [2024-07-25 01:29:22.489421] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.091 [2024-07-25 01:29:22.489426] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:29:00.091 [2024-07-25 01:29:22.489442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:00.091 qpair failed and we were unable to recover it.
00:29:00.091 [2024-07-25 01:29:22.499260] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.091 [2024-07-25 01:29:22.499398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.091 [2024-07-25 01:29:22.499416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.091 [2024-07-25 01:29:22.499423] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.091 [2024-07-25 01:29:22.499429] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:29:00.091 [2024-07-25 01:29:22.499445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:00.091 qpair failed and we were unable to recover it.
00:29:00.091 [2024-07-25 01:29:22.509306] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.091 [2024-07-25 01:29:22.509442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.091 [2024-07-25 01:29:22.509459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.091 [2024-07-25 01:29:22.509466] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.091 [2024-07-25 01:29:22.509472] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:29:00.091 [2024-07-25 01:29:22.509488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:00.091 qpair failed and we were unable to recover it.
00:29:00.091 [2024-07-25 01:29:22.519243] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.091 [2024-07-25 01:29:22.519399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.091 [2024-07-25 01:29:22.519416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.091 [2024-07-25 01:29:22.519423] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.091 [2024-07-25 01:29:22.519429] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:29:00.091 [2024-07-25 01:29:22.519445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:00.091 qpair failed and we were unable to recover it.
00:29:00.091 [2024-07-25 01:29:22.529362] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.091 [2024-07-25 01:29:22.529504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.091 [2024-07-25 01:29:22.529525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.091 [2024-07-25 01:29:22.529532] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.091 [2024-07-25 01:29:22.529537] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:29:00.091 [2024-07-25 01:29:22.529554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:00.091 qpair failed and we were unable to recover it.
00:29:00.091 [2024-07-25 01:29:22.539308] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.091 [2024-07-25 01:29:22.539447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.091 [2024-07-25 01:29:22.539465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.091 [2024-07-25 01:29:22.539472] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.091 [2024-07-25 01:29:22.539478] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:29:00.091 [2024-07-25 01:29:22.539495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:00.091 qpair failed and we were unable to recover it.
00:29:00.091 [2024-07-25 01:29:22.549415] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.091 [2024-07-25 01:29:22.549550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.091 [2024-07-25 01:29:22.549567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.091 [2024-07-25 01:29:22.549575] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.091 [2024-07-25 01:29:22.549581] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:29:00.091 [2024-07-25 01:29:22.549597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:00.091 qpair failed and we were unable to recover it.
00:29:00.091 [2024-07-25 01:29:22.559437] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.091 [2024-07-25 01:29:22.559576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.091 [2024-07-25 01:29:22.559593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.091 [2024-07-25 01:29:22.559601] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.092 [2024-07-25 01:29:22.559607] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:29:00.092 [2024-07-25 01:29:22.559623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:00.092 qpair failed and we were unable to recover it.
00:29:00.092 [2024-07-25 01:29:22.569477] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.092 [2024-07-25 01:29:22.569612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.092 [2024-07-25 01:29:22.569630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.092 [2024-07-25 01:29:22.569636] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.092 [2024-07-25 01:29:22.569642] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:29:00.092 [2024-07-25 01:29:22.569665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:00.092 qpair failed and we were unable to recover it.
00:29:00.092 [2024-07-25 01:29:22.579567] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.092 [2024-07-25 01:29:22.579724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.092 [2024-07-25 01:29:22.579741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.092 [2024-07-25 01:29:22.579748] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.092 [2024-07-25 01:29:22.579754] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:29:00.092 [2024-07-25 01:29:22.579771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:00.092 qpair failed and we were unable to recover it.
00:29:00.353 [2024-07-25 01:29:22.589445] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.353 [2024-07-25 01:29:22.589590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.353 [2024-07-25 01:29:22.589608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.353 [2024-07-25 01:29:22.589616] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.353 [2024-07-25 01:29:22.589621] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:29:00.353 [2024-07-25 01:29:22.589638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:00.353 qpair failed and we were unable to recover it.
00:29:00.353 [2024-07-25 01:29:22.599553] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.353 [2024-07-25 01:29:22.599686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.353 [2024-07-25 01:29:22.599703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.353 [2024-07-25 01:29:22.599710] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.353 [2024-07-25 01:29:22.599716] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:29:00.353 [2024-07-25 01:29:22.599732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:00.353 qpair failed and we were unable to recover it.
00:29:00.353 [2024-07-25 01:29:22.609592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.353 [2024-07-25 01:29:22.609732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.353 [2024-07-25 01:29:22.609749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.353 [2024-07-25 01:29:22.609756] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.353 [2024-07-25 01:29:22.609762] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:29:00.353 [2024-07-25 01:29:22.609778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:00.353 qpair failed and we were unable to recover it.
00:29:00.353 [2024-07-25 01:29:22.619627] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.353 [2024-07-25 01:29:22.619768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.353 [2024-07-25 01:29:22.619789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.353 [2024-07-25 01:29:22.619796] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.353 [2024-07-25 01:29:22.619802] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:29:00.353 [2024-07-25 01:29:22.619818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:00.353 qpair failed and we were unable to recover it.
00:29:00.353 [2024-07-25 01:29:22.629644] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.353 [2024-07-25 01:29:22.629792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.353 [2024-07-25 01:29:22.629809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.353 [2024-07-25 01:29:22.629816] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.353 [2024-07-25 01:29:22.629821] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:29:00.353 [2024-07-25 01:29:22.629836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:00.353 qpair failed and we were unable to recover it.
00:29:00.353 [2024-07-25 01:29:22.639651] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.353 [2024-07-25 01:29:22.639801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.353 [2024-07-25 01:29:22.639820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.353 [2024-07-25 01:29:22.639828] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.353 [2024-07-25 01:29:22.639834] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:29:00.353 [2024-07-25 01:29:22.639851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:00.353 qpair failed and we were unable to recover it.
00:29:00.353 [2024-07-25 01:29:22.649715] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.353 [2024-07-25 01:29:22.649851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.353 [2024-07-25 01:29:22.649869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.353 [2024-07-25 01:29:22.649876] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.353 [2024-07-25 01:29:22.649882] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:29:00.353 [2024-07-25 01:29:22.649898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:00.353 qpair failed and we were unable to recover it.
00:29:00.353 [2024-07-25 01:29:22.659732] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.353 [2024-07-25 01:29:22.659867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.353 [2024-07-25 01:29:22.659884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.353 [2024-07-25 01:29:22.659891] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.353 [2024-07-25 01:29:22.659900] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:29:00.353 [2024-07-25 01:29:22.659917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:00.353 qpair failed and we were unable to recover it.
00:29:00.353 [2024-07-25 01:29:22.669700] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.353 [2024-07-25 01:29:22.669875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.353 [2024-07-25 01:29:22.669900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.353 [2024-07-25 01:29:22.669907] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.353 [2024-07-25 01:29:22.669913] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:29:00.353 [2024-07-25 01:29:22.669929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:00.353 qpair failed and we were unable to recover it.
00:29:00.353 [2024-07-25 01:29:22.679792] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.353 [2024-07-25 01:29:22.679925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.353 [2024-07-25 01:29:22.679943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.353 [2024-07-25 01:29:22.679950] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.353 [2024-07-25 01:29:22.679955] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:29:00.353 [2024-07-25 01:29:22.679972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:00.353 qpair failed and we were unable to recover it.
00:29:00.353 [2024-07-25 01:29:22.689759] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.353 [2024-07-25 01:29:22.689910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.353 [2024-07-25 01:29:22.689928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.353 [2024-07-25 01:29:22.689935] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.354 [2024-07-25 01:29:22.689940] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:29:00.354 [2024-07-25 01:29:22.689956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:00.354 qpair failed and we were unable to recover it.
00:29:00.354 [2024-07-25 01:29:22.699834] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.354 [2024-07-25 01:29:22.700010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.354 [2024-07-25 01:29:22.700036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.354 [2024-07-25 01:29:22.700048] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.354 [2024-07-25 01:29:22.700055] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:29:00.354 [2024-07-25 01:29:22.700072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:00.354 qpair failed and we were unable to recover it.
00:29:00.354 [2024-07-25 01:29:22.709886] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.354 [2024-07-25 01:29:22.710031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.354 [2024-07-25 01:29:22.710053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.354 [2024-07-25 01:29:22.710060] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.354 [2024-07-25 01:29:22.710065] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:29:00.354 [2024-07-25 01:29:22.710082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:00.354 qpair failed and we were unable to recover it.
00:29:00.354 [2024-07-25 01:29:22.719922] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.354 [2024-07-25 01:29:22.720068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.354 [2024-07-25 01:29:22.720085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.354 [2024-07-25 01:29:22.720092] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.354 [2024-07-25 01:29:22.720097] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:29:00.354 [2024-07-25 01:29:22.720114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:00.354 qpair failed and we were unable to recover it.
00:29:00.354 [2024-07-25 01:29:22.729957] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.354 [2024-07-25 01:29:22.730100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.354 [2024-07-25 01:29:22.730117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.354 [2024-07-25 01:29:22.730124] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.354 [2024-07-25 01:29:22.730129] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:29:00.354 [2024-07-25 01:29:22.730146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:00.354 qpair failed and we were unable to recover it.
00:29:00.354 [2024-07-25 01:29:22.739972] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.354 [2024-07-25 01:29:22.740117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.354 [2024-07-25 01:29:22.740134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.354 [2024-07-25 01:29:22.740141] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.354 [2024-07-25 01:29:22.740146] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:29:00.354 [2024-07-25 01:29:22.740163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:00.354 qpair failed and we were unable to recover it.
00:29:00.354 [2024-07-25 01:29:22.750060] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.354 [2024-07-25 01:29:22.750214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.354 [2024-07-25 01:29:22.750232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.354 [2024-07-25 01:29:22.750239] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.354 [2024-07-25 01:29:22.750249] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:29:00.354 [2024-07-25 01:29:22.750265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:00.354 qpair failed and we were unable to recover it.
00:29:00.354 [2024-07-25 01:29:22.760037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.354 [2024-07-25 01:29:22.760178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.354 [2024-07-25 01:29:22.760196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.354 [2024-07-25 01:29:22.760203] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.354 [2024-07-25 01:29:22.760209] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:29:00.354 [2024-07-25 01:29:22.760225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:00.354 qpair failed and we were unable to recover it.
00:29:00.354 [2024-07-25 01:29:22.770080] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.354 [2024-07-25 01:29:22.770218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.354 [2024-07-25 01:29:22.770235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.354 [2024-07-25 01:29:22.770242] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.354 [2024-07-25 01:29:22.770248] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:29:00.354 [2024-07-25 01:29:22.770264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:00.354 qpair failed and we were unable to recover it.
00:29:00.354 [2024-07-25 01:29:22.780013] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.354 [2024-07-25 01:29:22.780149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.354 [2024-07-25 01:29:22.780167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.354 [2024-07-25 01:29:22.780174] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.354 [2024-07-25 01:29:22.780179] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:29:00.354 [2024-07-25 01:29:22.780195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:00.354 qpair failed and we were unable to recover it.
00:29:00.354 [2024-07-25 01:29:22.790114] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.354 [2024-07-25 01:29:22.790254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.354 [2024-07-25 01:29:22.790271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.354 [2024-07-25 01:29:22.790278] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.354 [2024-07-25 01:29:22.790283] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:29:00.354 [2024-07-25 01:29:22.790300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:00.354 qpair failed and we were unable to recover it.
00:29:00.354 [2024-07-25 01:29:22.800076] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.354 [2024-07-25 01:29:22.800250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.354 [2024-07-25 01:29:22.800267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.354 [2024-07-25 01:29:22.800274] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.354 [2024-07-25 01:29:22.800281] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:29:00.354 [2024-07-25 01:29:22.800298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:00.354 qpair failed and we were unable to recover it.
00:29:00.354 [2024-07-25 01:29:22.810183] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.354 [2024-07-25 01:29:22.810331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.354 [2024-07-25 01:29:22.810348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.354 [2024-07-25 01:29:22.810355] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.354 [2024-07-25 01:29:22.810361] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:29:00.354 [2024-07-25 01:29:22.810378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:00.354 qpair failed and we were unable to recover it. 
00:29:00.354 [2024-07-25 01:29:22.820197] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.354 [2024-07-25 01:29:22.820334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.354 [2024-07-25 01:29:22.820351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.354 [2024-07-25 01:29:22.820358] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.354 [2024-07-25 01:29:22.820364] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0 00:29:00.354 [2024-07-25 01:29:22.820380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:00.354 qpair failed and we were unable to recover it. 
00:29:00.355 [2024-07-25 01:29:22.830232] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.355 [2024-07-25 01:29:22.830368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.355 [2024-07-25 01:29:22.830385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.355 [2024-07-25 01:29:22.830392] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.355 [2024-07-25 01:29:22.830398] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:29:00.355 [2024-07-25 01:29:22.830414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:00.355 qpair failed and we were unable to recover it.
00:29:00.355 [2024-07-25 01:29:22.840183] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.355 [2024-07-25 01:29:22.840322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.355 [2024-07-25 01:29:22.840339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.355 [2024-07-25 01:29:22.840349] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.355 [2024-07-25 01:29:22.840355] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:29:00.355 [2024-07-25 01:29:22.840371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:00.355 qpair failed and we were unable to recover it.
00:29:00.616 [2024-07-25 01:29:22.850321] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.616 [2024-07-25 01:29:22.850467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.616 [2024-07-25 01:29:22.850485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.616 [2024-07-25 01:29:22.850492] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.616 [2024-07-25 01:29:22.850498] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:29:00.616 [2024-07-25 01:29:22.850514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:00.616 qpair failed and we were unable to recover it.
00:29:00.616 [2024-07-25 01:29:22.860316] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.616 [2024-07-25 01:29:22.860456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.616 [2024-07-25 01:29:22.860474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.616 [2024-07-25 01:29:22.860481] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.616 [2024-07-25 01:29:22.860486] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:29:00.616 [2024-07-25 01:29:22.860502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:00.616 qpair failed and we were unable to recover it.
00:29:00.616 [2024-07-25 01:29:22.870349] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.616 [2024-07-25 01:29:22.870491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.616 [2024-07-25 01:29:22.870508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.616 [2024-07-25 01:29:22.870515] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.616 [2024-07-25 01:29:22.870521] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:29:00.616 [2024-07-25 01:29:22.870537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:00.616 qpair failed and we were unable to recover it.
00:29:00.616 [2024-07-25 01:29:22.880348] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.616 [2024-07-25 01:29:22.880489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.616 [2024-07-25 01:29:22.880507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.616 [2024-07-25 01:29:22.880514] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.616 [2024-07-25 01:29:22.880519] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1375ed0
00:29:00.616 [2024-07-25 01:29:22.880535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:00.616 qpair failed and we were unable to recover it.
00:29:00.616 Read completed with error (sct=0, sc=8)
00:29:00.616 starting I/O failed
00:29:00.616 Read completed with error (sct=0, sc=8)
00:29:00.616 starting I/O failed
00:29:00.616 Read completed with error (sct=0, sc=8)
00:29:00.616 starting I/O failed
00:29:00.616 Read completed with error (sct=0, sc=8)
00:29:00.616 starting I/O failed
00:29:00.616 Read completed with error (sct=0, sc=8)
00:29:00.616 starting I/O failed
00:29:00.616 Read completed with error (sct=0, sc=8)
00:29:00.616 starting I/O failed
00:29:00.616 Read completed with error (sct=0, sc=8)
00:29:00.616 starting I/O failed
00:29:00.617 Read completed with error (sct=0, sc=8)
00:29:00.617 starting I/O failed
00:29:00.617 Read completed with error (sct=0, sc=8)
00:29:00.617 starting I/O failed
00:29:00.617 Write completed with error (sct=0, sc=8)
00:29:00.617 starting I/O failed
00:29:00.617 Read completed with error (sct=0, sc=8)
00:29:00.617 starting I/O failed
00:29:00.617 Write completed with error (sct=0, sc=8)
00:29:00.617 starting I/O failed
00:29:00.617 Write completed with error (sct=0, sc=8)
00:29:00.617 starting I/O failed
00:29:00.617 Read completed with error (sct=0, sc=8)
00:29:00.617 starting I/O failed
00:29:00.617 Read completed with error (sct=0, sc=8)
00:29:00.617 starting I/O failed
00:29:00.617 Write completed with error (sct=0, sc=8)
00:29:00.617 starting I/O failed
00:29:00.617 Read completed with error (sct=0, sc=8)
00:29:00.617 starting I/O failed
00:29:00.617 Write completed with error (sct=0, sc=8)
00:29:00.617 starting I/O failed
00:29:00.617 Read completed with error (sct=0, sc=8)
00:29:00.617 starting I/O failed
00:29:00.617 Read completed with error (sct=0, sc=8)
00:29:00.617 starting I/O failed
00:29:00.617 Read completed with error (sct=0, sc=8)
00:29:00.617 starting I/O failed
00:29:00.617 Read completed with error (sct=0, sc=8)
00:29:00.617 starting I/O failed
00:29:00.617 Read completed with error (sct=0, sc=8)
00:29:00.617 starting I/O failed
00:29:00.617 Write completed with error (sct=0, sc=8)
00:29:00.617 starting I/O failed
00:29:00.617 Read completed with error (sct=0, sc=8)
00:29:00.617 starting I/O failed
00:29:00.617 Read completed with error (sct=0, sc=8)
00:29:00.617 starting I/O failed
00:29:00.617 Read completed with error (sct=0, sc=8)
00:29:00.617 starting I/O failed
00:29:00.617 Write completed with error (sct=0, sc=8)
00:29:00.617 starting I/O failed
00:29:00.617 Write completed with error (sct=0, sc=8)
00:29:00.617 starting I/O failed
00:29:00.617 Read completed with error (sct=0, sc=8)
00:29:00.617 starting I/O failed
00:29:00.617 Read completed with error (sct=0, sc=8)
00:29:00.617 starting I/O failed
00:29:00.617 Read completed with error (sct=0, sc=8)
00:29:00.617 starting I/O failed
00:29:00.617 [2024-07-25 01:29:22.880848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:00.617 [2024-07-25 01:29:22.890384] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.617 [2024-07-25 01:29:22.890558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.617 [2024-07-25 01:29:22.890591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.617 [2024-07-25 01:29:22.890601] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.617 [2024-07-25 01:29:22.890610] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:00.617 [2024-07-25 01:29:22.890632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:00.617 qpair failed and we were unable to recover it.
00:29:00.617 [2024-07-25 01:29:22.900420] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.617 [2024-07-25 01:29:22.900561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.617 [2024-07-25 01:29:22.900580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.617 [2024-07-25 01:29:22.900587] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.617 [2024-07-25 01:29:22.900594] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:00.617 [2024-07-25 01:29:22.900611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:00.617 qpair failed and we were unable to recover it.
00:29:00.617 [2024-07-25 01:29:22.910457] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.617 [2024-07-25 01:29:22.910596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.617 [2024-07-25 01:29:22.910616] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.617 [2024-07-25 01:29:22.910624] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.617 [2024-07-25 01:29:22.910630] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:00.617 [2024-07-25 01:29:22.910647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:00.617 qpair failed and we were unable to recover it.
00:29:00.617 [2024-07-25 01:29:22.920474] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.617 [2024-07-25 01:29:22.920612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.617 [2024-07-25 01:29:22.920629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.617 [2024-07-25 01:29:22.920636] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.617 [2024-07-25 01:29:22.920642] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:00.617 [2024-07-25 01:29:22.920660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:00.617 qpair failed and we were unable to recover it.
00:29:00.617 [2024-07-25 01:29:22.930493] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.617 [2024-07-25 01:29:22.930647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.617 [2024-07-25 01:29:22.930664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.617 [2024-07-25 01:29:22.930671] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.617 [2024-07-25 01:29:22.930677] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:00.617 [2024-07-25 01:29:22.930694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:00.617 qpair failed and we were unable to recover it.
00:29:00.617 [2024-07-25 01:29:22.940532] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.617 [2024-07-25 01:29:22.940672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.617 [2024-07-25 01:29:22.940689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.617 [2024-07-25 01:29:22.940696] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.617 [2024-07-25 01:29:22.940702] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:00.617 [2024-07-25 01:29:22.940720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:00.617 qpair failed and we were unable to recover it.
00:29:00.617 [2024-07-25 01:29:22.950566] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.617 [2024-07-25 01:29:22.950704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.617 [2024-07-25 01:29:22.950721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.617 [2024-07-25 01:29:22.950728] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.617 [2024-07-25 01:29:22.950738] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:00.617 [2024-07-25 01:29:22.950755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:00.617 qpair failed and we were unable to recover it.
00:29:00.617 [2024-07-25 01:29:22.960569] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.617 [2024-07-25 01:29:22.960707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.617 [2024-07-25 01:29:22.960724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.617 [2024-07-25 01:29:22.960731] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.617 [2024-07-25 01:29:22.960737] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:00.617 [2024-07-25 01:29:22.960754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:00.617 qpair failed and we were unable to recover it.
00:29:00.617 [2024-07-25 01:29:22.970680] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.617 [2024-07-25 01:29:22.970841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.617 [2024-07-25 01:29:22.970858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.617 [2024-07-25 01:29:22.970865] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.617 [2024-07-25 01:29:22.970871] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:00.617 [2024-07-25 01:29:22.970888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:00.617 qpair failed and we were unable to recover it.
00:29:00.617 [2024-07-25 01:29:22.980663] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.617 [2024-07-25 01:29:22.980804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.617 [2024-07-25 01:29:22.980821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.617 [2024-07-25 01:29:22.980828] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.617 [2024-07-25 01:29:22.980833] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:00.617 [2024-07-25 01:29:22.980850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:00.618 qpair failed and we were unable to recover it.
00:29:00.618 [2024-07-25 01:29:22.990681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.618 [2024-07-25 01:29:22.990824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.618 [2024-07-25 01:29:22.990841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.618 [2024-07-25 01:29:22.990848] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.618 [2024-07-25 01:29:22.990854] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:00.618 [2024-07-25 01:29:22.990871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:00.618 qpair failed and we were unable to recover it.
00:29:00.618 [2024-07-25 01:29:23.000635] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.618 [2024-07-25 01:29:23.000785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.618 [2024-07-25 01:29:23.000802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.618 [2024-07-25 01:29:23.000809] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.618 [2024-07-25 01:29:23.000815] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:00.618 [2024-07-25 01:29:23.000832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:00.618 qpair failed and we were unable to recover it.
00:29:00.618 [2024-07-25 01:29:23.010747] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.618 [2024-07-25 01:29:23.010884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.618 [2024-07-25 01:29:23.010902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.618 [2024-07-25 01:29:23.010909] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.618 [2024-07-25 01:29:23.010915] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:00.618 [2024-07-25 01:29:23.010932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:00.618 qpair failed and we were unable to recover it.
00:29:00.618 [2024-07-25 01:29:23.020807] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.618 [2024-07-25 01:29:23.020967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.618 [2024-07-25 01:29:23.020986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.618 [2024-07-25 01:29:23.020994] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.618 [2024-07-25 01:29:23.021001] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:00.618 [2024-07-25 01:29:23.021019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:00.618 qpair failed and we were unable to recover it.
00:29:00.618 [2024-07-25 01:29:23.030715] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.618 [2024-07-25 01:29:23.030856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.618 [2024-07-25 01:29:23.030874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.618 [2024-07-25 01:29:23.030881] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.618 [2024-07-25 01:29:23.030887] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:00.618 [2024-07-25 01:29:23.030904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:00.618 qpair failed and we were unable to recover it.
00:29:00.618 [2024-07-25 01:29:23.040803] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.618 [2024-07-25 01:29:23.040934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.618 [2024-07-25 01:29:23.040951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.618 [2024-07-25 01:29:23.040962] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.618 [2024-07-25 01:29:23.040968] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:00.618 [2024-07-25 01:29:23.040985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:00.618 qpair failed and we were unable to recover it.
00:29:00.618 [2024-07-25 01:29:23.050866] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.618 [2024-07-25 01:29:23.051007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.618 [2024-07-25 01:29:23.051024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.618 [2024-07-25 01:29:23.051032] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.618 [2024-07-25 01:29:23.051038] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:00.618 [2024-07-25 01:29:23.051063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:00.618 qpair failed and we were unable to recover it.
00:29:00.618 [2024-07-25 01:29:23.060886] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.618 [2024-07-25 01:29:23.061032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.618 [2024-07-25 01:29:23.061055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.618 [2024-07-25 01:29:23.061063] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.618 [2024-07-25 01:29:23.061069] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:00.618 [2024-07-25 01:29:23.061087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:00.618 qpair failed and we were unable to recover it.
00:29:00.618 [2024-07-25 01:29:23.070910] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.618 [2024-07-25 01:29:23.071056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.618 [2024-07-25 01:29:23.071074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.618 [2024-07-25 01:29:23.071082] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.618 [2024-07-25 01:29:23.071088] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:00.618 [2024-07-25 01:29:23.071106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:00.618 qpair failed and we were unable to recover it.
00:29:00.618 [2024-07-25 01:29:23.080965] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.618 [2024-07-25 01:29:23.081110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.618 [2024-07-25 01:29:23.081128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.618 [2024-07-25 01:29:23.081135] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.618 [2024-07-25 01:29:23.081141] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:00.618 [2024-07-25 01:29:23.081158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:00.618 qpair failed and we were unable to recover it.
00:29:00.618 [2024-07-25 01:29:23.090975] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.618 [2024-07-25 01:29:23.091143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.618 [2024-07-25 01:29:23.091161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.618 [2024-07-25 01:29:23.091169] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.618 [2024-07-25 01:29:23.091175] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:00.618 [2024-07-25 01:29:23.091192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:00.618 qpair failed and we were unable to recover it.
00:29:00.618 [2024-07-25 01:29:23.101000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.618 [2024-07-25 01:29:23.101147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.618 [2024-07-25 01:29:23.101165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.618 [2024-07-25 01:29:23.101173] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.618 [2024-07-25 01:29:23.101179] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:00.618 [2024-07-25 01:29:23.101196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:00.618 qpair failed and we were unable to recover it. 
00:29:00.880 [2024-07-25 01:29:23.111022] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.880 [2024-07-25 01:29:23.111166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.880 [2024-07-25 01:29:23.111183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.880 [2024-07-25 01:29:23.111191] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.880 [2024-07-25 01:29:23.111196] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:00.880 [2024-07-25 01:29:23.111214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:00.880 qpair failed and we were unable to recover it. 
00:29:00.880 [2024-07-25 01:29:23.121058] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.880 [2024-07-25 01:29:23.121196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.880 [2024-07-25 01:29:23.121213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.880 [2024-07-25 01:29:23.121221] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.880 [2024-07-25 01:29:23.121228] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:00.880 [2024-07-25 01:29:23.121244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:00.880 qpair failed and we were unable to recover it. 
00:29:00.880 [2024-07-25 01:29:23.131088] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.880 [2024-07-25 01:29:23.131227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.880 [2024-07-25 01:29:23.131244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.880 [2024-07-25 01:29:23.131255] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.880 [2024-07-25 01:29:23.131261] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:00.880 [2024-07-25 01:29:23.131278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:00.880 qpair failed and we were unable to recover it. 
00:29:00.880 [2024-07-25 01:29:23.141105] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.880 [2024-07-25 01:29:23.141250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.880 [2024-07-25 01:29:23.141269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.880 [2024-07-25 01:29:23.141277] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.880 [2024-07-25 01:29:23.141283] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:00.880 [2024-07-25 01:29:23.141301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:00.880 qpair failed and we were unable to recover it. 
00:29:00.880 [2024-07-25 01:29:23.151201] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.880 [2024-07-25 01:29:23.151337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.880 [2024-07-25 01:29:23.151354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.880 [2024-07-25 01:29:23.151361] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.880 [2024-07-25 01:29:23.151367] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:00.880 [2024-07-25 01:29:23.151384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:00.881 qpair failed and we were unable to recover it. 
00:29:00.881 [2024-07-25 01:29:23.161168] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.881 [2024-07-25 01:29:23.161308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.881 [2024-07-25 01:29:23.161325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.881 [2024-07-25 01:29:23.161333] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.881 [2024-07-25 01:29:23.161339] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:00.881 [2024-07-25 01:29:23.161356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:00.881 qpair failed and we were unable to recover it. 
00:29:00.881 [2024-07-25 01:29:23.171200] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.881 [2024-07-25 01:29:23.171339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.881 [2024-07-25 01:29:23.171356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.881 [2024-07-25 01:29:23.171364] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.881 [2024-07-25 01:29:23.171371] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:00.881 [2024-07-25 01:29:23.171388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:00.881 qpair failed and we were unable to recover it. 
00:29:00.881 [2024-07-25 01:29:23.181233] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.881 [2024-07-25 01:29:23.181371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.881 [2024-07-25 01:29:23.181388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.881 [2024-07-25 01:29:23.181396] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.881 [2024-07-25 01:29:23.181402] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:00.881 [2024-07-25 01:29:23.181419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:00.881 qpair failed and we were unable to recover it. 
00:29:00.881 [2024-07-25 01:29:23.191262] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.881 [2024-07-25 01:29:23.191418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.881 [2024-07-25 01:29:23.191436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.881 [2024-07-25 01:29:23.191444] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.881 [2024-07-25 01:29:23.191451] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:00.881 [2024-07-25 01:29:23.191468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:00.881 qpair failed and we were unable to recover it. 
00:29:00.881 [2024-07-25 01:29:23.201275] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.881 [2024-07-25 01:29:23.201409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.881 [2024-07-25 01:29:23.201426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.881 [2024-07-25 01:29:23.201434] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.881 [2024-07-25 01:29:23.201440] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:00.881 [2024-07-25 01:29:23.201456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:00.881 qpair failed and we were unable to recover it. 
00:29:00.881 [2024-07-25 01:29:23.211299] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.881 [2024-07-25 01:29:23.211442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.881 [2024-07-25 01:29:23.211458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.881 [2024-07-25 01:29:23.211466] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.881 [2024-07-25 01:29:23.211472] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:00.881 [2024-07-25 01:29:23.211489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:00.881 qpair failed and we were unable to recover it. 
00:29:00.881 [2024-07-25 01:29:23.221339] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.881 [2024-07-25 01:29:23.221480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.881 [2024-07-25 01:29:23.221500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.881 [2024-07-25 01:29:23.221508] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.881 [2024-07-25 01:29:23.221514] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:00.881 [2024-07-25 01:29:23.221532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:00.881 qpair failed and we were unable to recover it. 
00:29:00.881 [2024-07-25 01:29:23.231335] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.881 [2024-07-25 01:29:23.231470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.881 [2024-07-25 01:29:23.231487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.881 [2024-07-25 01:29:23.231495] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.881 [2024-07-25 01:29:23.231502] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:00.881 [2024-07-25 01:29:23.231518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:00.881 qpair failed and we were unable to recover it. 
00:29:00.881 [2024-07-25 01:29:23.241377] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.881 [2024-07-25 01:29:23.241516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.881 [2024-07-25 01:29:23.241533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.881 [2024-07-25 01:29:23.241541] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.881 [2024-07-25 01:29:23.241547] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:00.882 [2024-07-25 01:29:23.241564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:00.882 qpair failed and we were unable to recover it. 
00:29:00.882 [2024-07-25 01:29:23.251432] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.882 [2024-07-25 01:29:23.251588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.882 [2024-07-25 01:29:23.251605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.882 [2024-07-25 01:29:23.251612] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.882 [2024-07-25 01:29:23.251619] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:00.882 [2024-07-25 01:29:23.251636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:00.882 qpair failed and we were unable to recover it. 
00:29:00.882 [2024-07-25 01:29:23.261424] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.882 [2024-07-25 01:29:23.261565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.882 [2024-07-25 01:29:23.261582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.882 [2024-07-25 01:29:23.261590] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.882 [2024-07-25 01:29:23.261596] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:00.882 [2024-07-25 01:29:23.261616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:00.882 qpair failed and we were unable to recover it. 
00:29:00.882 [2024-07-25 01:29:23.271431] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.882 [2024-07-25 01:29:23.271572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.882 [2024-07-25 01:29:23.271591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.882 [2024-07-25 01:29:23.271599] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.882 [2024-07-25 01:29:23.271606] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:00.882 [2024-07-25 01:29:23.271623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:00.882 qpair failed and we were unable to recover it. 
00:29:00.882 [2024-07-25 01:29:23.281472] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.882 [2024-07-25 01:29:23.281608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.882 [2024-07-25 01:29:23.281627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.882 [2024-07-25 01:29:23.281634] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.882 [2024-07-25 01:29:23.281641] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:00.882 [2024-07-25 01:29:23.281658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:00.882 qpair failed and we were unable to recover it. 
00:29:00.882 [2024-07-25 01:29:23.291500] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.882 [2024-07-25 01:29:23.291651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.882 [2024-07-25 01:29:23.291670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.882 [2024-07-25 01:29:23.291679] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.882 [2024-07-25 01:29:23.291685] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:00.882 [2024-07-25 01:29:23.291703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:00.882 qpair failed and we were unable to recover it. 
00:29:00.882 [2024-07-25 01:29:23.301538] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.882 [2024-07-25 01:29:23.301682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.882 [2024-07-25 01:29:23.301701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.882 [2024-07-25 01:29:23.301709] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.882 [2024-07-25 01:29:23.301716] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:00.882 [2024-07-25 01:29:23.301732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:00.882 qpair failed and we were unable to recover it. 
00:29:00.882 [2024-07-25 01:29:23.311498] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.882 [2024-07-25 01:29:23.311632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.882 [2024-07-25 01:29:23.311654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.882 [2024-07-25 01:29:23.311662] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.882 [2024-07-25 01:29:23.311668] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:00.882 [2024-07-25 01:29:23.311685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:00.882 qpair failed and we were unable to recover it. 
00:29:00.882 [2024-07-25 01:29:23.321622] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.882 [2024-07-25 01:29:23.321757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.882 [2024-07-25 01:29:23.321774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.882 [2024-07-25 01:29:23.321782] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.882 [2024-07-25 01:29:23.321788] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:00.882 [2024-07-25 01:29:23.321806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:00.882 qpair failed and we were unable to recover it. 
00:29:00.882 [2024-07-25 01:29:23.331853] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.882 [2024-07-25 01:29:23.331997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.882 [2024-07-25 01:29:23.332014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.882 [2024-07-25 01:29:23.332022] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.882 [2024-07-25 01:29:23.332028] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:00.883 [2024-07-25 01:29:23.332050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:00.883 qpair failed and we were unable to recover it. 
00:29:00.883 [2024-07-25 01:29:23.341583] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.883 [2024-07-25 01:29:23.341721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.883 [2024-07-25 01:29:23.341739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.883 [2024-07-25 01:29:23.341747] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.883 [2024-07-25 01:29:23.341753] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:00.883 [2024-07-25 01:29:23.341771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:00.883 qpair failed and we were unable to recover it. 
00:29:00.883 [2024-07-25 01:29:23.351663] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.883 [2024-07-25 01:29:23.351849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.883 [2024-07-25 01:29:23.351868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.883 [2024-07-25 01:29:23.351876] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.883 [2024-07-25 01:29:23.351886] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:00.883 [2024-07-25 01:29:23.351903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:00.883 qpair failed and we were unable to recover it. 
00:29:00.883 [2024-07-25 01:29:23.361635] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.883 [2024-07-25 01:29:23.361785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.883 [2024-07-25 01:29:23.361802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.883 [2024-07-25 01:29:23.361810] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.883 [2024-07-25 01:29:23.361816] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:00.883 [2024-07-25 01:29:23.361833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:00.883 qpair failed and we were unable to recover it. 
00:29:01.143 [2024-07-25 01:29:23.371770] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.143 [2024-07-25 01:29:23.371912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.143 [2024-07-25 01:29:23.371929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.143 [2024-07-25 01:29:23.371937] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.143 [2024-07-25 01:29:23.371944] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:01.143 [2024-07-25 01:29:23.371961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.143 qpair failed and we were unable to recover it. 
00:29:01.143 [2024-07-25 01:29:23.381778] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.143 [2024-07-25 01:29:23.381916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.143 [2024-07-25 01:29:23.381934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.143 [2024-07-25 01:29:23.381943] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.143 [2024-07-25 01:29:23.381951] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:01.143 [2024-07-25 01:29:23.381971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.143 qpair failed and we were unable to recover it. 
00:29:01.143 [2024-07-25 01:29:23.391792] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.143 [2024-07-25 01:29:23.391930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.143 [2024-07-25 01:29:23.391948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.143 [2024-07-25 01:29:23.391955] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.143 [2024-07-25 01:29:23.391962] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:01.143 [2024-07-25 01:29:23.391979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.143 qpair failed and we were unable to recover it. 
00:29:01.143 [2024-07-25 01:29:23.401852] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.143 [2024-07-25 01:29:23.401996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.143 [2024-07-25 01:29:23.402016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.143 [2024-07-25 01:29:23.402024] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.143 [2024-07-25 01:29:23.402030] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:01.143 [2024-07-25 01:29:23.402055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.143 qpair failed and we were unable to recover it. 
00:29:01.143 [2024-07-25 01:29:23.411870] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.143 [2024-07-25 01:29:23.412013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.143 [2024-07-25 01:29:23.412031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.143 [2024-07-25 01:29:23.412039] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.143 [2024-07-25 01:29:23.412051] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:01.143 [2024-07-25 01:29:23.412069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.143 qpair failed and we were unable to recover it. 
00:29:01.143 [2024-07-25 01:29:23.421892] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.143 [2024-07-25 01:29:23.422038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.143 [2024-07-25 01:29:23.422061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.143 [2024-07-25 01:29:23.422068] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.143 [2024-07-25 01:29:23.422075] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:01.143 [2024-07-25 01:29:23.422092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.143 qpair failed and we were unable to recover it. 
00:29:01.143 [2024-07-25 01:29:23.432163] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.143 [2024-07-25 01:29:23.432305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.143 [2024-07-25 01:29:23.432322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.143 [2024-07-25 01:29:23.432329] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.143 [2024-07-25 01:29:23.432335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:01.143 [2024-07-25 01:29:23.432352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.143 qpair failed and we were unable to recover it. 
00:29:01.143 [2024-07-25 01:29:23.441870] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.143 [2024-07-25 01:29:23.442004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.143 [2024-07-25 01:29:23.442023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.143 [2024-07-25 01:29:23.442032] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.143 [2024-07-25 01:29:23.442050] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:01.143 [2024-07-25 01:29:23.442068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.143 qpair failed and we were unable to recover it. 
00:29:01.143 [2024-07-25 01:29:23.451989] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.143 [2024-07-25 01:29:23.452132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.143 [2024-07-25 01:29:23.452149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.143 [2024-07-25 01:29:23.452156] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.143 [2024-07-25 01:29:23.452162] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:01.143 [2024-07-25 01:29:23.452180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.143 qpair failed and we were unable to recover it. 
00:29:01.143 [2024-07-25 01:29:23.461999] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.143 [2024-07-25 01:29:23.462146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.143 [2024-07-25 01:29:23.462164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.143 [2024-07-25 01:29:23.462171] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.143 [2024-07-25 01:29:23.462177] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:01.143 [2024-07-25 01:29:23.462194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.143 qpair failed and we were unable to recover it. 
00:29:01.143 [2024-07-25 01:29:23.471953] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.143 [2024-07-25 01:29:23.472109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.143 [2024-07-25 01:29:23.472126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.143 [2024-07-25 01:29:23.472134] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.143 [2024-07-25 01:29:23.472140] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:01.143 [2024-07-25 01:29:23.472156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.143 qpair failed and we were unable to recover it. 
00:29:01.143 [2024-07-25 01:29:23.481999] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.143 [2024-07-25 01:29:23.482145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.143 [2024-07-25 01:29:23.482164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.143 [2024-07-25 01:29:23.482172] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.143 [2024-07-25 01:29:23.482178] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:01.143 [2024-07-25 01:29:23.482194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.143 qpair failed and we were unable to recover it. 
00:29:01.143 [2024-07-25 01:29:23.492024] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.143 [2024-07-25 01:29:23.492174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.143 [2024-07-25 01:29:23.492191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.143 [2024-07-25 01:29:23.492199] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.143 [2024-07-25 01:29:23.492205] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:01.143 [2024-07-25 01:29:23.492222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.143 qpair failed and we were unable to recover it. 
00:29:01.143 [2024-07-25 01:29:23.502124] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.143 [2024-07-25 01:29:23.502266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.143 [2024-07-25 01:29:23.502283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.143 [2024-07-25 01:29:23.502291] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.143 [2024-07-25 01:29:23.502297] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:01.143 [2024-07-25 01:29:23.502313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.143 qpair failed and we were unable to recover it. 
00:29:01.143 [2024-07-25 01:29:23.512088] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.143 [2024-07-25 01:29:23.512274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.143 [2024-07-25 01:29:23.512290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.143 [2024-07-25 01:29:23.512298] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.143 [2024-07-25 01:29:23.512305] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:01.143 [2024-07-25 01:29:23.512321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.143 qpair failed and we were unable to recover it. 
00:29:01.143 [2024-07-25 01:29:23.522215] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.143 [2024-07-25 01:29:23.522351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.143 [2024-07-25 01:29:23.522368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.143 [2024-07-25 01:29:23.522376] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.143 [2024-07-25 01:29:23.522382] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:01.143 [2024-07-25 01:29:23.522398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.143 qpair failed and we were unable to recover it. 
00:29:01.143 [2024-07-25 01:29:23.532145] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.143 [2024-07-25 01:29:23.532285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.143 [2024-07-25 01:29:23.532302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.143 [2024-07-25 01:29:23.532313] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.143 [2024-07-25 01:29:23.532319] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:01.143 [2024-07-25 01:29:23.532336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.143 qpair failed and we were unable to recover it. 
00:29:01.143 [2024-07-25 01:29:23.542220] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.143 [2024-07-25 01:29:23.542375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.143 [2024-07-25 01:29:23.542392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.143 [2024-07-25 01:29:23.542400] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.143 [2024-07-25 01:29:23.542406] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:01.143 [2024-07-25 01:29:23.542423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.143 qpair failed and we were unable to recover it. 
00:29:01.143 [2024-07-25 01:29:23.552281] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.143 [2024-07-25 01:29:23.552623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.143 [2024-07-25 01:29:23.552641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.143 [2024-07-25 01:29:23.552648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.144 [2024-07-25 01:29:23.552655] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:01.144 [2024-07-25 01:29:23.552671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.144 qpair failed and we were unable to recover it. 
00:29:01.144 [2024-07-25 01:29:23.562239] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.144 [2024-07-25 01:29:23.562377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.144 [2024-07-25 01:29:23.562394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.144 [2024-07-25 01:29:23.562401] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.144 [2024-07-25 01:29:23.562408] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:01.144 [2024-07-25 01:29:23.562424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.144 qpair failed and we were unable to recover it. 
00:29:01.144 [2024-07-25 01:29:23.572259] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.144 [2024-07-25 01:29:23.572400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.144 [2024-07-25 01:29:23.572417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.144 [2024-07-25 01:29:23.572424] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.144 [2024-07-25 01:29:23.572430] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:01.144 [2024-07-25 01:29:23.572446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.144 qpair failed and we were unable to recover it. 
00:29:01.144 [2024-07-25 01:29:23.582287] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.144 [2024-07-25 01:29:23.582625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.144 [2024-07-25 01:29:23.582641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.144 [2024-07-25 01:29:23.582649] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.144 [2024-07-25 01:29:23.582655] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:01.144 [2024-07-25 01:29:23.582671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.144 qpair failed and we were unable to recover it. 
00:29:01.144 [2024-07-25 01:29:23.592364] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.144 [2024-07-25 01:29:23.592507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.144 [2024-07-25 01:29:23.592524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.144 [2024-07-25 01:29:23.592532] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.144 [2024-07-25 01:29:23.592538] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:01.144 [2024-07-25 01:29:23.592554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.144 qpair failed and we were unable to recover it. 
00:29:01.144 [2024-07-25 01:29:23.602339] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.144 [2024-07-25 01:29:23.602476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.144 [2024-07-25 01:29:23.602493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.144 [2024-07-25 01:29:23.602500] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.144 [2024-07-25 01:29:23.602507] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:01.144 [2024-07-25 01:29:23.602524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.144 qpair failed and we were unable to recover it. 
00:29:01.144 [2024-07-25 01:29:23.612382] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.144 [2024-07-25 01:29:23.612523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.144 [2024-07-25 01:29:23.612540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.144 [2024-07-25 01:29:23.612548] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.144 [2024-07-25 01:29:23.612554] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:01.144 [2024-07-25 01:29:23.612571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.144 qpair failed and we were unable to recover it. 
00:29:01.144 [2024-07-25 01:29:23.622403] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.144 [2024-07-25 01:29:23.622542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.144 [2024-07-25 01:29:23.622562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.144 [2024-07-25 01:29:23.622571] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.144 [2024-07-25 01:29:23.622577] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:01.144 [2024-07-25 01:29:23.622594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.144 qpair failed and we were unable to recover it. 
00:29:01.144 [2024-07-25 01:29:23.632418] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.144 [2024-07-25 01:29:23.632556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.144 [2024-07-25 01:29:23.632574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.144 [2024-07-25 01:29:23.632582] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.144 [2024-07-25 01:29:23.632588] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:01.144 [2024-07-25 01:29:23.632606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.144 qpair failed and we were unable to recover it. 
00:29:01.404 [2024-07-25 01:29:23.642450] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.404 [2024-07-25 01:29:23.642593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.404 [2024-07-25 01:29:23.642610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.404 [2024-07-25 01:29:23.642618] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.404 [2024-07-25 01:29:23.642624] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:01.404 [2024-07-25 01:29:23.642641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.404 qpair failed and we were unable to recover it. 
00:29:01.404 [2024-07-25 01:29:23.652556] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.404 [2024-07-25 01:29:23.652719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.404 [2024-07-25 01:29:23.652736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.404 [2024-07-25 01:29:23.652744] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.404 [2024-07-25 01:29:23.652750] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:01.404 [2024-07-25 01:29:23.652767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.404 qpair failed and we were unable to recover it. 
00:29:01.404 [2024-07-25 01:29:23.662507] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.404 [2024-07-25 01:29:23.662651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.404 [2024-07-25 01:29:23.662668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.404 [2024-07-25 01:29:23.662676] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.404 [2024-07-25 01:29:23.662682] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:01.404 [2024-07-25 01:29:23.662703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.404 qpair failed and we were unable to recover it. 
00:29:01.404 [2024-07-25 01:29:23.672654] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.404 [2024-07-25 01:29:23.672830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.404 [2024-07-25 01:29:23.672847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.404 [2024-07-25 01:29:23.672854] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.404 [2024-07-25 01:29:23.672861] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:01.404 [2024-07-25 01:29:23.672878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.404 qpair failed and we were unable to recover it. 
00:29:01.404 [2024-07-25 01:29:23.682643] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.404 [2024-07-25 01:29:23.682783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.404 [2024-07-25 01:29:23.682800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.404 [2024-07-25 01:29:23.682808] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.404 [2024-07-25 01:29:23.682814] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:01.404 [2024-07-25 01:29:23.682832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.404 qpair failed and we were unable to recover it. 
00:29:01.404 [2024-07-25 01:29:23.692838] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.404 [2024-07-25 01:29:23.692980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.404 [2024-07-25 01:29:23.692997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.404 [2024-07-25 01:29:23.693004] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.404 [2024-07-25 01:29:23.693010] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:01.405 [2024-07-25 01:29:23.693027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.405 qpair failed and we were unable to recover it. 
00:29:01.405 [2024-07-25 01:29:23.702676] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.405 [2024-07-25 01:29:23.702819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.405 [2024-07-25 01:29:23.702836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.405 [2024-07-25 01:29:23.702843] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.405 [2024-07-25 01:29:23.702849] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:01.405 [2024-07-25 01:29:23.702866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.405 qpair failed and we were unable to recover it.
00:29:01.405 [2024-07-25 01:29:23.712661] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.405 [2024-07-25 01:29:23.712810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.405 [2024-07-25 01:29:23.712831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.405 [2024-07-25 01:29:23.712838] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.405 [2024-07-25 01:29:23.712845] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:01.405 [2024-07-25 01:29:23.712861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.405 qpair failed and we were unable to recover it.
00:29:01.405 [2024-07-25 01:29:23.722730] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.405 [2024-07-25 01:29:23.722870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.405 [2024-07-25 01:29:23.722887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.405 [2024-07-25 01:29:23.722894] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.405 [2024-07-25 01:29:23.722901] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:01.405 [2024-07-25 01:29:23.722918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.405 qpair failed and we were unable to recover it.
00:29:01.405 [2024-07-25 01:29:23.732769] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.405 [2024-07-25 01:29:23.732910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.405 [2024-07-25 01:29:23.732928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.405 [2024-07-25 01:29:23.732935] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.405 [2024-07-25 01:29:23.732941] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:01.405 [2024-07-25 01:29:23.732957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.405 qpair failed and we were unable to recover it.
00:29:01.405 [2024-07-25 01:29:23.742836] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.405 [2024-07-25 01:29:23.742979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.405 [2024-07-25 01:29:23.742995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.405 [2024-07-25 01:29:23.743003] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.405 [2024-07-25 01:29:23.743009] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:01.405 [2024-07-25 01:29:23.743026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.405 qpair failed and we were unable to recover it.
00:29:01.405 [2024-07-25 01:29:23.752837] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.405 [2024-07-25 01:29:23.752977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.405 [2024-07-25 01:29:23.752994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.405 [2024-07-25 01:29:23.753002] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.405 [2024-07-25 01:29:23.753008] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:01.405 [2024-07-25 01:29:23.753028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.405 qpair failed and we were unable to recover it.
00:29:01.405 [2024-07-25 01:29:23.762878] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.405 [2024-07-25 01:29:23.763023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.405 [2024-07-25 01:29:23.763041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.405 [2024-07-25 01:29:23.763055] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.405 [2024-07-25 01:29:23.763061] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:01.405 [2024-07-25 01:29:23.763078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.405 qpair failed and we were unable to recover it.
00:29:01.405 [2024-07-25 01:29:23.772904] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.405 [2024-07-25 01:29:23.773054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.405 [2024-07-25 01:29:23.773072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.405 [2024-07-25 01:29:23.773080] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.405 [2024-07-25 01:29:23.773087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:01.405 [2024-07-25 01:29:23.773103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.405 qpair failed and we were unable to recover it.
00:29:01.405 [2024-07-25 01:29:23.782920] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.405 [2024-07-25 01:29:23.783066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.405 [2024-07-25 01:29:23.783084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.405 [2024-07-25 01:29:23.783092] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.405 [2024-07-25 01:29:23.783098] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:01.405 [2024-07-25 01:29:23.783115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.405 qpair failed and we were unable to recover it.
00:29:01.405 [2024-07-25 01:29:23.792941] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.405 [2024-07-25 01:29:23.793083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.405 [2024-07-25 01:29:23.793102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.405 [2024-07-25 01:29:23.793111] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.405 [2024-07-25 01:29:23.793117] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:01.405 [2024-07-25 01:29:23.793136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.405 qpair failed and we were unable to recover it.
00:29:01.405 [2024-07-25 01:29:23.802982] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.405 [2024-07-25 01:29:23.803133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.405 [2024-07-25 01:29:23.803152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.405 [2024-07-25 01:29:23.803159] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.405 [2024-07-25 01:29:23.803166] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:01.405 [2024-07-25 01:29:23.803183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.405 qpair failed and we were unable to recover it.
00:29:01.405 [2024-07-25 01:29:23.813017] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.405 [2024-07-25 01:29:23.813159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.405 [2024-07-25 01:29:23.813177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.405 [2024-07-25 01:29:23.813184] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.405 [2024-07-25 01:29:23.813191] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:01.405 [2024-07-25 01:29:23.813207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.405 qpair failed and we were unable to recover it.
00:29:01.405 [2024-07-25 01:29:23.823011] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.405 [2024-07-25 01:29:23.823161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.405 [2024-07-25 01:29:23.823179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.405 [2024-07-25 01:29:23.823187] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.405 [2024-07-25 01:29:23.823193] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:01.405 [2024-07-25 01:29:23.823209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.406 qpair failed and we were unable to recover it.
00:29:01.406 [2024-07-25 01:29:23.833068] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.406 [2024-07-25 01:29:23.833207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.406 [2024-07-25 01:29:23.833224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.406 [2024-07-25 01:29:23.833231] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.406 [2024-07-25 01:29:23.833237] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:01.406 [2024-07-25 01:29:23.833253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.406 qpair failed and we were unable to recover it.
00:29:01.406 [2024-07-25 01:29:23.843096] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.406 [2024-07-25 01:29:23.843236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.406 [2024-07-25 01:29:23.843253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.406 [2024-07-25 01:29:23.843260] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.406 [2024-07-25 01:29:23.843272] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:01.406 [2024-07-25 01:29:23.843289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.406 qpair failed and we were unable to recover it.
00:29:01.406 [2024-07-25 01:29:23.853068] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.406 [2024-07-25 01:29:23.853208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.406 [2024-07-25 01:29:23.853225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.406 [2024-07-25 01:29:23.853232] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.406 [2024-07-25 01:29:23.853238] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:01.406 [2024-07-25 01:29:23.853255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.406 qpair failed and we were unable to recover it.
00:29:01.406 [2024-07-25 01:29:23.863166] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.406 [2024-07-25 01:29:23.863507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.406 [2024-07-25 01:29:23.863525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.406 [2024-07-25 01:29:23.863532] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.406 [2024-07-25 01:29:23.863539] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:01.406 [2024-07-25 01:29:23.863556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.406 qpair failed and we were unable to recover it.
00:29:01.406 [2024-07-25 01:29:23.873217] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.406 [2024-07-25 01:29:23.873391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.406 [2024-07-25 01:29:23.873409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.406 [2024-07-25 01:29:23.873417] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.406 [2024-07-25 01:29:23.873424] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:01.406 [2024-07-25 01:29:23.873441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.406 qpair failed and we were unable to recover it.
00:29:01.406 [2024-07-25 01:29:23.883232] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.406 [2024-07-25 01:29:23.883390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.406 [2024-07-25 01:29:23.883406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.406 [2024-07-25 01:29:23.883414] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.406 [2024-07-25 01:29:23.883420] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:01.406 [2024-07-25 01:29:23.883437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.406 qpair failed and we were unable to recover it.
00:29:01.406 [2024-07-25 01:29:23.893262] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.406 [2024-07-25 01:29:23.893401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.406 [2024-07-25 01:29:23.893418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.406 [2024-07-25 01:29:23.893425] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.406 [2024-07-25 01:29:23.893432] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:01.406 [2024-07-25 01:29:23.893448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.406 qpair failed and we were unable to recover it.
00:29:01.668 [2024-07-25 01:29:23.903270] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.668 [2024-07-25 01:29:23.903413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.668 [2024-07-25 01:29:23.903432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.668 [2024-07-25 01:29:23.903440] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.668 [2024-07-25 01:29:23.903447] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:01.668 [2024-07-25 01:29:23.903464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.668 qpair failed and we were unable to recover it.
00:29:01.668 [2024-07-25 01:29:23.913309] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.668 [2024-07-25 01:29:23.913451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.668 [2024-07-25 01:29:23.913467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.668 [2024-07-25 01:29:23.913475] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.668 [2024-07-25 01:29:23.913481] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:01.668 [2024-07-25 01:29:23.913498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.668 qpair failed and we were unable to recover it.
00:29:01.668 [2024-07-25 01:29:23.923329] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.668 [2024-07-25 01:29:23.923467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.668 [2024-07-25 01:29:23.923484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.668 [2024-07-25 01:29:23.923491] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.668 [2024-07-25 01:29:23.923497] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:01.668 [2024-07-25 01:29:23.923514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.668 qpair failed and we were unable to recover it.
00:29:01.668 [2024-07-25 01:29:23.933358] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.668 [2024-07-25 01:29:23.933496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.668 [2024-07-25 01:29:23.933513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.668 [2024-07-25 01:29:23.933523] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.668 [2024-07-25 01:29:23.933530] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:01.668 [2024-07-25 01:29:23.933547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.668 qpair failed and we were unable to recover it.
00:29:01.668 [2024-07-25 01:29:23.943384] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.668 [2024-07-25 01:29:23.943525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.668 [2024-07-25 01:29:23.943542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.668 [2024-07-25 01:29:23.943549] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.668 [2024-07-25 01:29:23.943556] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:01.668 [2024-07-25 01:29:23.943573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.668 qpair failed and we were unable to recover it.
00:29:01.668 [2024-07-25 01:29:23.953342] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.668 [2024-07-25 01:29:23.953533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.668 [2024-07-25 01:29:23.953551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.668 [2024-07-25 01:29:23.953559] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.668 [2024-07-25 01:29:23.953565] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:01.668 [2024-07-25 01:29:23.953582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.668 qpair failed and we were unable to recover it.
00:29:01.668 [2024-07-25 01:29:23.963359] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.668 [2024-07-25 01:29:23.963509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.668 [2024-07-25 01:29:23.963526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.668 [2024-07-25 01:29:23.963534] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.668 [2024-07-25 01:29:23.963539] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:01.669 [2024-07-25 01:29:23.963557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.669 qpair failed and we were unable to recover it.
00:29:01.669 [2024-07-25 01:29:23.973477] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.669 [2024-07-25 01:29:23.973623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.669 [2024-07-25 01:29:23.973640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.669 [2024-07-25 01:29:23.973647] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.669 [2024-07-25 01:29:23.973654] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:01.669 [2024-07-25 01:29:23.973671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.669 qpair failed and we were unable to recover it.
00:29:01.669 [2024-07-25 01:29:23.983493] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.669 [2024-07-25 01:29:23.983636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.669 [2024-07-25 01:29:23.983653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.669 [2024-07-25 01:29:23.983661] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.669 [2024-07-25 01:29:23.983667] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:01.669 [2024-07-25 01:29:23.983684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.669 qpair failed and we were unable to recover it.
00:29:01.669 [2024-07-25 01:29:23.993529] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.669 [2024-07-25 01:29:23.993666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.669 [2024-07-25 01:29:23.993683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.669 [2024-07-25 01:29:23.993691] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.669 [2024-07-25 01:29:23.993697] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:01.669 [2024-07-25 01:29:23.993714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.669 qpair failed and we were unable to recover it.
00:29:01.669 [2024-07-25 01:29:24.003593] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.669 [2024-07-25 01:29:24.003737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.669 [2024-07-25 01:29:24.003754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.669 [2024-07-25 01:29:24.003762] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.669 [2024-07-25 01:29:24.003768] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:01.669 [2024-07-25 01:29:24.003785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.669 qpair failed and we were unable to recover it.
00:29:01.669 [2024-07-25 01:29:24.013642] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.669 [2024-07-25 01:29:24.013794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.669 [2024-07-25 01:29:24.013811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.669 [2024-07-25 01:29:24.013819] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.669 [2024-07-25 01:29:24.013825] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:01.669 [2024-07-25 01:29:24.013842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.669 qpair failed and we were unable to recover it.
00:29:01.669 [2024-07-25 01:29:24.023608] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.669 [2024-07-25 01:29:24.023749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.669 [2024-07-25 01:29:24.023769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.669 [2024-07-25 01:29:24.023777] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.669 [2024-07-25 01:29:24.023783] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:01.669 [2024-07-25 01:29:24.023800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.669 qpair failed and we were unable to recover it.
00:29:01.669 [2024-07-25 01:29:24.033550] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.669 [2024-07-25 01:29:24.033697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.669 [2024-07-25 01:29:24.033714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.669 [2024-07-25 01:29:24.033722] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.669 [2024-07-25 01:29:24.033728] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:01.669 [2024-07-25 01:29:24.033745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.669 qpair failed and we were unable to recover it.
00:29:01.669 [2024-07-25 01:29:24.043670] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.669 [2024-07-25 01:29:24.043812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.669 [2024-07-25 01:29:24.043829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.669 [2024-07-25 01:29:24.043836] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.669 [2024-07-25 01:29:24.043842] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:01.669 [2024-07-25 01:29:24.043859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.669 qpair failed and we were unable to recover it.
00:29:01.669 [2024-07-25 01:29:24.053720] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.669 [2024-07-25 01:29:24.053860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.669 [2024-07-25 01:29:24.053878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.669 [2024-07-25 01:29:24.053885] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.669 [2024-07-25 01:29:24.053892] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:01.669 [2024-07-25 01:29:24.053908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.669 qpair failed and we were unable to recover it.
00:29:01.669 [2024-07-25 01:29:24.063729] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.669 [2024-07-25 01:29:24.063868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.669 [2024-07-25 01:29:24.063885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.669 [2024-07-25 01:29:24.063893] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.669 [2024-07-25 01:29:24.063899] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:01.669 [2024-07-25 01:29:24.063920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.669 qpair failed and we were unable to recover it. 
00:29:01.669 [2024-07-25 01:29:24.073744] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.669 [2024-07-25 01:29:24.073882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.669 [2024-07-25 01:29:24.073899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.669 [2024-07-25 01:29:24.073907] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.669 [2024-07-25 01:29:24.073913] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:01.669 [2024-07-25 01:29:24.073930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.669 qpair failed and we were unable to recover it. 
00:29:01.669 [2024-07-25 01:29:24.083767] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.669 [2024-07-25 01:29:24.083904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.669 [2024-07-25 01:29:24.083921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.669 [2024-07-25 01:29:24.083928] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.669 [2024-07-25 01:29:24.083934] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:01.669 [2024-07-25 01:29:24.083952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.669 qpair failed and we were unable to recover it. 
00:29:01.669 [2024-07-25 01:29:24.093815] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.669 [2024-07-25 01:29:24.093958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.669 [2024-07-25 01:29:24.093975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.670 [2024-07-25 01:29:24.093983] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.670 [2024-07-25 01:29:24.093989] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:01.670 [2024-07-25 01:29:24.094006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.670 qpair failed and we were unable to recover it. 
00:29:01.670 [2024-07-25 01:29:24.103841] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.670 [2024-07-25 01:29:24.103986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.670 [2024-07-25 01:29:24.104003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.670 [2024-07-25 01:29:24.104011] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.670 [2024-07-25 01:29:24.104017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:01.670 [2024-07-25 01:29:24.104034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.670 qpair failed and we were unable to recover it. 
00:29:01.670 [2024-07-25 01:29:24.113875] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.670 [2024-07-25 01:29:24.114009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.670 [2024-07-25 01:29:24.114030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.670 [2024-07-25 01:29:24.114038] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.670 [2024-07-25 01:29:24.114050] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:01.670 [2024-07-25 01:29:24.114067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.670 qpair failed and we were unable to recover it. 
00:29:01.670 [2024-07-25 01:29:24.123815] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.670 [2024-07-25 01:29:24.123958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.670 [2024-07-25 01:29:24.123975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.670 [2024-07-25 01:29:24.123982] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.670 [2024-07-25 01:29:24.123988] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:01.670 [2024-07-25 01:29:24.124005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.670 qpair failed and we were unable to recover it. 
00:29:01.670 [2024-07-25 01:29:24.133974] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.670 [2024-07-25 01:29:24.134120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.670 [2024-07-25 01:29:24.134138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.670 [2024-07-25 01:29:24.134145] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.670 [2024-07-25 01:29:24.134152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:01.670 [2024-07-25 01:29:24.134168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.670 qpair failed and we were unable to recover it. 
00:29:01.670 [2024-07-25 01:29:24.143948] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.670 [2024-07-25 01:29:24.144099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.670 [2024-07-25 01:29:24.144116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.670 [2024-07-25 01:29:24.144123] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.670 [2024-07-25 01:29:24.144130] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:01.670 [2024-07-25 01:29:24.144147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.670 qpair failed and we were unable to recover it. 
00:29:01.670 [2024-07-25 01:29:24.153910] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.670 [2024-07-25 01:29:24.154057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.670 [2024-07-25 01:29:24.154075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.670 [2024-07-25 01:29:24.154082] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.670 [2024-07-25 01:29:24.154088] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:01.670 [2024-07-25 01:29:24.154109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.670 qpair failed and we were unable to recover it. 
00:29:01.931 [2024-07-25 01:29:24.164035] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.931 [2024-07-25 01:29:24.164174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.931 [2024-07-25 01:29:24.164191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.931 [2024-07-25 01:29:24.164199] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.931 [2024-07-25 01:29:24.164205] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:01.931 [2024-07-25 01:29:24.164222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.931 qpair failed and we were unable to recover it. 
00:29:01.931 [2024-07-25 01:29:24.174038] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.931 [2024-07-25 01:29:24.174197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.931 [2024-07-25 01:29:24.174214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.931 [2024-07-25 01:29:24.174222] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.931 [2024-07-25 01:29:24.174228] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:01.931 [2024-07-25 01:29:24.174245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.931 qpair failed and we were unable to recover it. 
00:29:01.931 [2024-07-25 01:29:24.183991] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.931 [2024-07-25 01:29:24.184148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.931 [2024-07-25 01:29:24.184165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.931 [2024-07-25 01:29:24.184173] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.931 [2024-07-25 01:29:24.184179] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:01.931 [2024-07-25 01:29:24.184196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.931 qpair failed and we were unable to recover it. 
00:29:01.931 [2024-07-25 01:29:24.194108] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.931 [2024-07-25 01:29:24.194244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.931 [2024-07-25 01:29:24.194261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.931 [2024-07-25 01:29:24.194269] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.931 [2024-07-25 01:29:24.194275] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:01.931 [2024-07-25 01:29:24.194291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.931 qpair failed and we were unable to recover it. 
00:29:01.931 [2024-07-25 01:29:24.204136] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.931 [2024-07-25 01:29:24.204280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.931 [2024-07-25 01:29:24.204301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.931 [2024-07-25 01:29:24.204309] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.931 [2024-07-25 01:29:24.204315] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:01.931 [2024-07-25 01:29:24.204331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.931 qpair failed and we were unable to recover it. 
00:29:01.931 [2024-07-25 01:29:24.214177] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.931 [2024-07-25 01:29:24.214318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.931 [2024-07-25 01:29:24.214336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.931 [2024-07-25 01:29:24.214344] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.931 [2024-07-25 01:29:24.214350] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:01.931 [2024-07-25 01:29:24.214366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.931 qpair failed and we were unable to recover it. 
00:29:01.931 [2024-07-25 01:29:24.224116] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.931 [2024-07-25 01:29:24.224254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.931 [2024-07-25 01:29:24.224271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.931 [2024-07-25 01:29:24.224279] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.931 [2024-07-25 01:29:24.224285] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:01.931 [2024-07-25 01:29:24.224302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.931 qpair failed and we were unable to recover it. 
00:29:01.931 [2024-07-25 01:29:24.234226] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.931 [2024-07-25 01:29:24.234364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.931 [2024-07-25 01:29:24.234381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.931 [2024-07-25 01:29:24.234389] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.931 [2024-07-25 01:29:24.234395] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:01.931 [2024-07-25 01:29:24.234412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.931 qpair failed and we were unable to recover it. 
00:29:01.931 [2024-07-25 01:29:24.244267] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.931 [2024-07-25 01:29:24.244404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.931 [2024-07-25 01:29:24.244422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.931 [2024-07-25 01:29:24.244429] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.931 [2024-07-25 01:29:24.244438] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:01.931 [2024-07-25 01:29:24.244455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.931 qpair failed and we were unable to recover it. 
00:29:01.931 [2024-07-25 01:29:24.254293] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.931 [2024-07-25 01:29:24.254449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.932 [2024-07-25 01:29:24.254466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.932 [2024-07-25 01:29:24.254473] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.932 [2024-07-25 01:29:24.254479] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:01.932 [2024-07-25 01:29:24.254496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.932 qpair failed and we were unable to recover it. 
00:29:01.932 [2024-07-25 01:29:24.264331] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.932 [2024-07-25 01:29:24.264470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.932 [2024-07-25 01:29:24.264487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.932 [2024-07-25 01:29:24.264494] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.932 [2024-07-25 01:29:24.264501] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:01.932 [2024-07-25 01:29:24.264517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.932 qpair failed and we were unable to recover it. 
00:29:01.932 [2024-07-25 01:29:24.274352] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.932 [2024-07-25 01:29:24.274489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.932 [2024-07-25 01:29:24.274507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.932 [2024-07-25 01:29:24.274514] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.932 [2024-07-25 01:29:24.274521] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:01.932 [2024-07-25 01:29:24.274538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.932 qpair failed and we were unable to recover it. 
00:29:01.932 [2024-07-25 01:29:24.284370] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.932 [2024-07-25 01:29:24.284513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.932 [2024-07-25 01:29:24.284530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.932 [2024-07-25 01:29:24.284537] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.932 [2024-07-25 01:29:24.284543] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:01.932 [2024-07-25 01:29:24.284560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.932 qpair failed and we were unable to recover it. 
00:29:01.932 [2024-07-25 01:29:24.294397] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.932 [2024-07-25 01:29:24.294545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.932 [2024-07-25 01:29:24.294562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.932 [2024-07-25 01:29:24.294570] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.932 [2024-07-25 01:29:24.294576] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:01.932 [2024-07-25 01:29:24.294592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.932 qpair failed and we were unable to recover it. 
00:29:01.932 [2024-07-25 01:29:24.304657] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.932 [2024-07-25 01:29:24.304840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.932 [2024-07-25 01:29:24.304858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.932 [2024-07-25 01:29:24.304866] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.932 [2024-07-25 01:29:24.304872] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:01.932 [2024-07-25 01:29:24.304890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.932 qpair failed and we were unable to recover it. 
00:29:01.932 [2024-07-25 01:29:24.314424] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.932 [2024-07-25 01:29:24.314562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.932 [2024-07-25 01:29:24.314580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.932 [2024-07-25 01:29:24.314587] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.932 [2024-07-25 01:29:24.314593] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:01.932 [2024-07-25 01:29:24.314610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.932 qpair failed and we were unable to recover it. 
00:29:01.932 [2024-07-25 01:29:24.324479] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.932 [2024-07-25 01:29:24.324618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.932 [2024-07-25 01:29:24.324635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.932 [2024-07-25 01:29:24.324642] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.932 [2024-07-25 01:29:24.324649] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:01.932 [2024-07-25 01:29:24.324666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.932 qpair failed and we were unable to recover it. 
00:29:01.932 [2024-07-25 01:29:24.334441] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.932 [2024-07-25 01:29:24.334579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.932 [2024-07-25 01:29:24.334596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.932 [2024-07-25 01:29:24.334607] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.932 [2024-07-25 01:29:24.334613] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:01.932 [2024-07-25 01:29:24.334630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.932 qpair failed and we were unable to recover it. 
00:29:01.932 [2024-07-25 01:29:24.344652] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.932 [2024-07-25 01:29:24.344798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.932 [2024-07-25 01:29:24.344815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.932 [2024-07-25 01:29:24.344823] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.932 [2024-07-25 01:29:24.344829] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:01.932 [2024-07-25 01:29:24.344846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.932 qpair failed and we were unable to recover it.
00:29:01.932 [2024-07-25 01:29:24.354566] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.932 [2024-07-25 01:29:24.354703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.932 [2024-07-25 01:29:24.354721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.932 [2024-07-25 01:29:24.354728] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.932 [2024-07-25 01:29:24.354735] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:01.932 [2024-07-25 01:29:24.354752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.932 qpair failed and we were unable to recover it.
00:29:01.932 [2024-07-25 01:29:24.364593] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.932 [2024-07-25 01:29:24.364730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.932 [2024-07-25 01:29:24.364747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.932 [2024-07-25 01:29:24.364755] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.932 [2024-07-25 01:29:24.364761] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:01.932 [2024-07-25 01:29:24.364778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.932 qpair failed and we were unable to recover it.
00:29:01.932 [2024-07-25 01:29:24.374626] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.933 [2024-07-25 01:29:24.374778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.933 [2024-07-25 01:29:24.374795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.933 [2024-07-25 01:29:24.374803] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.933 [2024-07-25 01:29:24.374809] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:01.933 [2024-07-25 01:29:24.374825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.933 qpair failed and we were unable to recover it.
00:29:01.933 [2024-07-25 01:29:24.384577] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.933 [2024-07-25 01:29:24.384719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.933 [2024-07-25 01:29:24.384736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.933 [2024-07-25 01:29:24.384743] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.933 [2024-07-25 01:29:24.384749] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:01.933 [2024-07-25 01:29:24.384766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.933 qpair failed and we were unable to recover it.
00:29:01.933 [2024-07-25 01:29:24.394679] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.933 [2024-07-25 01:29:24.394821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.933 [2024-07-25 01:29:24.394838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.933 [2024-07-25 01:29:24.394846] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.933 [2024-07-25 01:29:24.394852] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:01.933 [2024-07-25 01:29:24.394870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.933 qpair failed and we were unable to recover it.
00:29:01.933 [2024-07-25 01:29:24.404637] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.933 [2024-07-25 01:29:24.404778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.933 [2024-07-25 01:29:24.404797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.933 [2024-07-25 01:29:24.404805] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.933 [2024-07-25 01:29:24.404811] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:01.933 [2024-07-25 01:29:24.404829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.933 qpair failed and we were unable to recover it.
00:29:01.933 [2024-07-25 01:29:24.414743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.933 [2024-07-25 01:29:24.414880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.933 [2024-07-25 01:29:24.414899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.933 [2024-07-25 01:29:24.414907] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.933 [2024-07-25 01:29:24.414913] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:01.933 [2024-07-25 01:29:24.414931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.933 qpair failed and we were unable to recover it.
00:29:02.194 [2024-07-25 01:29:24.424767] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.194 [2024-07-25 01:29:24.424910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.194 [2024-07-25 01:29:24.424929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.194 [2024-07-25 01:29:24.424941] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.194 [2024-07-25 01:29:24.424947] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:02.194 [2024-07-25 01:29:24.424964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.194 qpair failed and we were unable to recover it.
00:29:02.194 [2024-07-25 01:29:24.434722] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.194 [2024-07-25 01:29:24.434862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.194 [2024-07-25 01:29:24.434880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.194 [2024-07-25 01:29:24.434887] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.194 [2024-07-25 01:29:24.434894] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:02.194 [2024-07-25 01:29:24.434911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.194 qpair failed and we were unable to recover it.
00:29:02.194 [2024-07-25 01:29:24.444858] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.194 [2024-07-25 01:29:24.445026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.194 [2024-07-25 01:29:24.445050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.194 [2024-07-25 01:29:24.445058] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.194 [2024-07-25 01:29:24.445065] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:02.194 [2024-07-25 01:29:24.445083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.194 qpair failed and we were unable to recover it.
00:29:02.194 [2024-07-25 01:29:24.454850] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.194 [2024-07-25 01:29:24.454991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.194 [2024-07-25 01:29:24.455008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.194 [2024-07-25 01:29:24.455015] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.194 [2024-07-25 01:29:24.455021] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:02.194 [2024-07-25 01:29:24.455038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.194 qpair failed and we were unable to recover it.
00:29:02.194 [2024-07-25 01:29:24.464864] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.194 [2024-07-25 01:29:24.465002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.194 [2024-07-25 01:29:24.465019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.194 [2024-07-25 01:29:24.465027] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.194 [2024-07-25 01:29:24.465033] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:02.194 [2024-07-25 01:29:24.465055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.194 qpair failed and we were unable to recover it.
00:29:02.194 [2024-07-25 01:29:24.474929] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.194 [2024-07-25 01:29:24.475099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.194 [2024-07-25 01:29:24.475117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.194 [2024-07-25 01:29:24.475124] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.194 [2024-07-25 01:29:24.475130] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:02.194 [2024-07-25 01:29:24.475147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.194 qpair failed and we were unable to recover it.
00:29:02.194 [2024-07-25 01:29:24.484928] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.194 [2024-07-25 01:29:24.485075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.194 [2024-07-25 01:29:24.485092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.194 [2024-07-25 01:29:24.485099] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.194 [2024-07-25 01:29:24.485106] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:02.194 [2024-07-25 01:29:24.485123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.194 qpair failed and we were unable to recover it.
00:29:02.194 [2024-07-25 01:29:24.494989] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.194 [2024-07-25 01:29:24.495142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.194 [2024-07-25 01:29:24.495160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.194 [2024-07-25 01:29:24.495167] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.194 [2024-07-25 01:29:24.495173] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:02.194 [2024-07-25 01:29:24.495190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.194 qpair failed and we were unable to recover it.
00:29:02.194 [2024-07-25 01:29:24.504911] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.194 [2024-07-25 01:29:24.505095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.194 [2024-07-25 01:29:24.505112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.194 [2024-07-25 01:29:24.505119] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.194 [2024-07-25 01:29:24.505125] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:02.194 [2024-07-25 01:29:24.505142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.194 qpair failed and we were unable to recover it.
00:29:02.194 [2024-07-25 01:29:24.515046] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.194 [2024-07-25 01:29:24.515208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.194 [2024-07-25 01:29:24.515229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.194 [2024-07-25 01:29:24.515237] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.194 [2024-07-25 01:29:24.515243] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:02.194 [2024-07-25 01:29:24.515261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.194 qpair failed and we were unable to recover it.
00:29:02.194 [2024-07-25 01:29:24.525047] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.194 [2024-07-25 01:29:24.525179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.194 [2024-07-25 01:29:24.525197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.194 [2024-07-25 01:29:24.525204] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.194 [2024-07-25 01:29:24.525210] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:02.194 [2024-07-25 01:29:24.525227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.194 qpair failed and we were unable to recover it.
00:29:02.194 [2024-07-25 01:29:24.535085] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.195 [2024-07-25 01:29:24.535225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.195 [2024-07-25 01:29:24.535242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.195 [2024-07-25 01:29:24.535250] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.195 [2024-07-25 01:29:24.535255] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:02.195 [2024-07-25 01:29:24.535272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.195 qpair failed and we were unable to recover it.
00:29:02.195 [2024-07-25 01:29:24.545108] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.195 [2024-07-25 01:29:24.545250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.195 [2024-07-25 01:29:24.545267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.195 [2024-07-25 01:29:24.545275] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.195 [2024-07-25 01:29:24.545282] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:02.195 [2024-07-25 01:29:24.545298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.195 qpair failed and we were unable to recover it.
00:29:02.195 [2024-07-25 01:29:24.555137] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.195 [2024-07-25 01:29:24.555277] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.195 [2024-07-25 01:29:24.555294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.195 [2024-07-25 01:29:24.555301] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.195 [2024-07-25 01:29:24.555308] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:02.195 [2024-07-25 01:29:24.555331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.195 qpair failed and we were unable to recover it.
00:29:02.195 [2024-07-25 01:29:24.565070] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.195 [2024-07-25 01:29:24.565212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.195 [2024-07-25 01:29:24.565229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.195 [2024-07-25 01:29:24.565237] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.195 [2024-07-25 01:29:24.565243] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:02.195 [2024-07-25 01:29:24.565260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.195 qpair failed and we were unable to recover it.
00:29:02.195 [2024-07-25 01:29:24.575195] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.195 [2024-07-25 01:29:24.575332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.195 [2024-07-25 01:29:24.575349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.195 [2024-07-25 01:29:24.575356] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.195 [2024-07-25 01:29:24.575363] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:02.195 [2024-07-25 01:29:24.575379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.195 qpair failed and we were unable to recover it.
00:29:02.195 [2024-07-25 01:29:24.585217] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.195 [2024-07-25 01:29:24.585351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.195 [2024-07-25 01:29:24.585368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.195 [2024-07-25 01:29:24.585375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.195 [2024-07-25 01:29:24.585381] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:02.195 [2024-07-25 01:29:24.585398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.195 qpair failed and we were unable to recover it.
00:29:02.195 [2024-07-25 01:29:24.595249] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.195 [2024-07-25 01:29:24.595392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.195 [2024-07-25 01:29:24.595409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.195 [2024-07-25 01:29:24.595416] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.195 [2024-07-25 01:29:24.595422] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:02.195 [2024-07-25 01:29:24.595439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.195 qpair failed and we were unable to recover it.
00:29:02.195 [2024-07-25 01:29:24.605264] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.195 [2024-07-25 01:29:24.605405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.195 [2024-07-25 01:29:24.605425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.195 [2024-07-25 01:29:24.605433] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.195 [2024-07-25 01:29:24.605439] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:02.195 [2024-07-25 01:29:24.605455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.195 qpair failed and we were unable to recover it.
00:29:02.195 [2024-07-25 01:29:24.615307] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.195 [2024-07-25 01:29:24.615445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.195 [2024-07-25 01:29:24.615461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.195 [2024-07-25 01:29:24.615469] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.195 [2024-07-25 01:29:24.615475] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:02.195 [2024-07-25 01:29:24.615492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.195 qpair failed and we were unable to recover it.
00:29:02.195 [2024-07-25 01:29:24.625246] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.195 [2024-07-25 01:29:24.625398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.195 [2024-07-25 01:29:24.625415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.195 [2024-07-25 01:29:24.625423] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.195 [2024-07-25 01:29:24.625429] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:02.195 [2024-07-25 01:29:24.625445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.195 qpair failed and we were unable to recover it.
00:29:02.195 [2024-07-25 01:29:24.635354] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.195 [2024-07-25 01:29:24.635492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.195 [2024-07-25 01:29:24.635509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.195 [2024-07-25 01:29:24.635516] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.195 [2024-07-25 01:29:24.635522] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:02.195 [2024-07-25 01:29:24.635539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.195 qpair failed and we were unable to recover it.
00:29:02.195 [2024-07-25 01:29:24.645355] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.195 [2024-07-25 01:29:24.645494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.195 [2024-07-25 01:29:24.645510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.195 [2024-07-25 01:29:24.645518] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.195 [2024-07-25 01:29:24.645527] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:02.195 [2024-07-25 01:29:24.645544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.195 qpair failed and we were unable to recover it.
00:29:02.195 [2024-07-25 01:29:24.655411] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.195 [2024-07-25 01:29:24.655555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.195 [2024-07-25 01:29:24.655573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.195 [2024-07-25 01:29:24.655581] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.196 [2024-07-25 01:29:24.655587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:02.196 [2024-07-25 01:29:24.655604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.196 qpair failed and we were unable to recover it.
00:29:02.196 [2024-07-25 01:29:24.665430] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.196 [2024-07-25 01:29:24.665570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.196 [2024-07-25 01:29:24.665588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.196 [2024-07-25 01:29:24.665596] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.196 [2024-07-25 01:29:24.665603] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:02.196 [2024-07-25 01:29:24.665621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.196 qpair failed and we were unable to recover it.
00:29:02.196 [2024-07-25 01:29:24.675501] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.196 [2024-07-25 01:29:24.675641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.196 [2024-07-25 01:29:24.675658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.196 [2024-07-25 01:29:24.675666] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.196 [2024-07-25 01:29:24.675673] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:02.196 [2024-07-25 01:29:24.675690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.196 qpair failed and we were unable to recover it.
00:29:02.457 [2024-07-25 01:29:24.685489] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.457 [2024-07-25 01:29:24.685628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.457 [2024-07-25 01:29:24.685645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.457 [2024-07-25 01:29:24.685653] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.457 [2024-07-25 01:29:24.685659] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:02.457 [2024-07-25 01:29:24.685677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.457 qpair failed and we were unable to recover it.
00:29:02.457 [2024-07-25 01:29:24.695527] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.457 [2024-07-25 01:29:24.695671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.457 [2024-07-25 01:29:24.695691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.457 [2024-07-25 01:29:24.695698] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.457 [2024-07-25 01:29:24.695705] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90
00:29:02.457 [2024-07-25 01:29:24.695723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.457 qpair failed and we were unable to recover it.
00:29:02.457 [2024-07-25 01:29:24.705563] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.457 [2024-07-25 01:29:24.705708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.457 [2024-07-25 01:29:24.705728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.457 [2024-07-25 01:29:24.705736] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.457 [2024-07-25 01:29:24.705743] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:02.457 [2024-07-25 01:29:24.705762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.457 qpair failed and we were unable to recover it. 
00:29:02.457 [2024-07-25 01:29:24.715587] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.457 [2024-07-25 01:29:24.715726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.457 [2024-07-25 01:29:24.715742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.457 [2024-07-25 01:29:24.715750] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.457 [2024-07-25 01:29:24.715756] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:02.457 [2024-07-25 01:29:24.715774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.457 qpair failed and we were unable to recover it. 
00:29:02.457 [2024-07-25 01:29:24.725576] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.457 [2024-07-25 01:29:24.725766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.457 [2024-07-25 01:29:24.725783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.457 [2024-07-25 01:29:24.725791] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.457 [2024-07-25 01:29:24.725798] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:02.457 [2024-07-25 01:29:24.725815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.457 qpair failed and we were unable to recover it. 
00:29:02.457 [2024-07-25 01:29:24.735629] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.457 [2024-07-25 01:29:24.735768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.457 [2024-07-25 01:29:24.735786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.457 [2024-07-25 01:29:24.735797] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.457 [2024-07-25 01:29:24.735804] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:02.457 [2024-07-25 01:29:24.735821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.457 qpair failed and we were unable to recover it. 
00:29:02.457 [2024-07-25 01:29:24.745584] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.457 [2024-07-25 01:29:24.745725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.457 [2024-07-25 01:29:24.745742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.457 [2024-07-25 01:29:24.745749] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.457 [2024-07-25 01:29:24.745756] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:02.457 [2024-07-25 01:29:24.745773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.457 qpair failed and we were unable to recover it. 
00:29:02.457 [2024-07-25 01:29:24.755613] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.457 [2024-07-25 01:29:24.755756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.457 [2024-07-25 01:29:24.755774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.458 [2024-07-25 01:29:24.755782] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.458 [2024-07-25 01:29:24.755788] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:02.458 [2024-07-25 01:29:24.755804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.458 qpair failed and we were unable to recover it. 
00:29:02.458 [2024-07-25 01:29:24.765724] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.458 [2024-07-25 01:29:24.765904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.458 [2024-07-25 01:29:24.765920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.458 [2024-07-25 01:29:24.765928] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.458 [2024-07-25 01:29:24.765935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:02.458 [2024-07-25 01:29:24.765951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.458 qpair failed and we were unable to recover it. 
00:29:02.458 [2024-07-25 01:29:24.775937] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.458 [2024-07-25 01:29:24.776084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.458 [2024-07-25 01:29:24.776101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.458 [2024-07-25 01:29:24.776108] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.458 [2024-07-25 01:29:24.776114] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:02.458 [2024-07-25 01:29:24.776131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.458 qpair failed and we were unable to recover it. 
00:29:02.458 [2024-07-25 01:29:24.785780] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.458 [2024-07-25 01:29:24.785922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.458 [2024-07-25 01:29:24.785939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.458 [2024-07-25 01:29:24.785947] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.458 [2024-07-25 01:29:24.785953] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:02.458 [2024-07-25 01:29:24.785970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.458 qpair failed and we were unable to recover it. 
00:29:02.458 [2024-07-25 01:29:24.795785] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.458 [2024-07-25 01:29:24.795929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.458 [2024-07-25 01:29:24.795949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.458 [2024-07-25 01:29:24.795957] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.458 [2024-07-25 01:29:24.795964] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:02.458 [2024-07-25 01:29:24.795981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.458 qpair failed and we were unable to recover it. 
00:29:02.458 [2024-07-25 01:29:24.805832] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.458 [2024-07-25 01:29:24.805974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.458 [2024-07-25 01:29:24.805992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.458 [2024-07-25 01:29:24.806000] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.458 [2024-07-25 01:29:24.806006] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:02.458 [2024-07-25 01:29:24.806025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.458 qpair failed and we were unable to recover it. 
00:29:02.458 [2024-07-25 01:29:24.815819] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.458 [2024-07-25 01:29:24.815962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.458 [2024-07-25 01:29:24.815979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.458 [2024-07-25 01:29:24.815986] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.458 [2024-07-25 01:29:24.815992] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:02.458 [2024-07-25 01:29:24.816009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.458 qpair failed and we were unable to recover it. 
00:29:02.458 [2024-07-25 01:29:24.825898] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.458 [2024-07-25 01:29:24.826032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.458 [2024-07-25 01:29:24.826056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.458 [2024-07-25 01:29:24.826069] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.458 [2024-07-25 01:29:24.826076] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:02.458 [2024-07-25 01:29:24.826092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.458 qpair failed and we were unable to recover it. 
00:29:02.458 [2024-07-25 01:29:24.835928] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.458 [2024-07-25 01:29:24.836072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.458 [2024-07-25 01:29:24.836089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.458 [2024-07-25 01:29:24.836097] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.458 [2024-07-25 01:29:24.836103] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:02.458 [2024-07-25 01:29:24.836120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.458 qpair failed and we were unable to recover it. 
00:29:02.458 [2024-07-25 01:29:24.845976] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.458 [2024-07-25 01:29:24.846124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.458 [2024-07-25 01:29:24.846142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.458 [2024-07-25 01:29:24.846149] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.458 [2024-07-25 01:29:24.846155] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:02.458 [2024-07-25 01:29:24.846172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.458 qpair failed and we were unable to recover it. 
00:29:02.458 [2024-07-25 01:29:24.855988] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.458 [2024-07-25 01:29:24.856133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.458 [2024-07-25 01:29:24.856150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.458 [2024-07-25 01:29:24.856158] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.458 [2024-07-25 01:29:24.856164] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:02.458 [2024-07-25 01:29:24.856181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.458 qpair failed and we were unable to recover it. 
00:29:02.458 [2024-07-25 01:29:24.865959] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.458 [2024-07-25 01:29:24.866141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.458 [2024-07-25 01:29:24.866158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.458 [2024-07-25 01:29:24.866166] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.458 [2024-07-25 01:29:24.866172] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:02.458 [2024-07-25 01:29:24.866189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.458 qpair failed and we were unable to recover it. 
00:29:02.458 [2024-07-25 01:29:24.876053] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.458 [2024-07-25 01:29:24.876194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.459 [2024-07-25 01:29:24.876211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.459 [2024-07-25 01:29:24.876219] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.459 [2024-07-25 01:29:24.876225] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:02.459 [2024-07-25 01:29:24.876241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.459 qpair failed and we were unable to recover it. 
00:29:02.459 [2024-07-25 01:29:24.886060] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.459 [2024-07-25 01:29:24.886198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.459 [2024-07-25 01:29:24.886215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.459 [2024-07-25 01:29:24.886223] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.459 [2024-07-25 01:29:24.886229] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:02.459 [2024-07-25 01:29:24.886246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.459 qpair failed and we were unable to recover it. 
00:29:02.459 [2024-07-25 01:29:24.896119] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.459 [2024-07-25 01:29:24.896268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.459 [2024-07-25 01:29:24.896286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.459 [2024-07-25 01:29:24.896293] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.459 [2024-07-25 01:29:24.896300] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:02.459 [2024-07-25 01:29:24.896316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.459 qpair failed and we were unable to recover it. 
00:29:02.459 [2024-07-25 01:29:24.906148] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.459 [2024-07-25 01:29:24.906292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.459 [2024-07-25 01:29:24.906309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.459 [2024-07-25 01:29:24.906317] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.459 [2024-07-25 01:29:24.906323] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:02.459 [2024-07-25 01:29:24.906340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.459 qpair failed and we were unable to recover it. 
00:29:02.459 [2024-07-25 01:29:24.916153] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.459 [2024-07-25 01:29:24.916295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.459 [2024-07-25 01:29:24.916315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.459 [2024-07-25 01:29:24.916322] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.459 [2024-07-25 01:29:24.916329] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:02.459 [2024-07-25 01:29:24.916345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.459 qpair failed and we were unable to recover it. 
00:29:02.459 [2024-07-25 01:29:24.926203] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.459 [2024-07-25 01:29:24.926340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.459 [2024-07-25 01:29:24.926357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.459 [2024-07-25 01:29:24.926365] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.459 [2024-07-25 01:29:24.926371] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:02.459 [2024-07-25 01:29:24.926387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.459 qpair failed and we were unable to recover it. 
00:29:02.459 [2024-07-25 01:29:24.936225] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.459 [2024-07-25 01:29:24.936367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.459 [2024-07-25 01:29:24.936383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.459 [2024-07-25 01:29:24.936391] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.459 [2024-07-25 01:29:24.936397] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:02.459 [2024-07-25 01:29:24.936413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.459 qpair failed and we were unable to recover it. 
00:29:02.459 [2024-07-25 01:29:24.946257] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.459 [2024-07-25 01:29:24.946399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.459 [2024-07-25 01:29:24.946416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.459 [2024-07-25 01:29:24.946424] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.459 [2024-07-25 01:29:24.946430] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:02.459 [2024-07-25 01:29:24.946447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.459 qpair failed and we were unable to recover it. 
00:29:02.723 [2024-07-25 01:29:24.956207] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.723 [2024-07-25 01:29:24.956346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.723 [2024-07-25 01:29:24.956365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.723 [2024-07-25 01:29:24.956373] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.723 [2024-07-25 01:29:24.956380] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:02.723 [2024-07-25 01:29:24.956401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.723 qpair failed and we were unable to recover it. 
00:29:02.723 [2024-07-25 01:29:24.966296] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.723 [2024-07-25 01:29:24.966441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.723 [2024-07-25 01:29:24.966460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.723 [2024-07-25 01:29:24.966467] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.723 [2024-07-25 01:29:24.966474] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:02.723 [2024-07-25 01:29:24.966491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.723 qpair failed and we were unable to recover it. 
00:29:02.723 [2024-07-25 01:29:24.976268] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.723 [2024-07-25 01:29:24.976408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.723 [2024-07-25 01:29:24.976425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.723 [2024-07-25 01:29:24.976432] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.723 [2024-07-25 01:29:24.976439] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:02.723 [2024-07-25 01:29:24.976455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.723 qpair failed and we were unable to recover it. 
00:29:02.723 [2024-07-25 01:29:24.986302] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.723 [2024-07-25 01:29:24.986444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.723 [2024-07-25 01:29:24.986461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.723 [2024-07-25 01:29:24.986468] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.723 [2024-07-25 01:29:24.986474] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:02.723 [2024-07-25 01:29:24.986491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.723 qpair failed and we were unable to recover it. 
00:29:02.723 [2024-07-25 01:29:24.996328] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.723 [2024-07-25 01:29:24.996464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.723 [2024-07-25 01:29:24.996481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.723 [2024-07-25 01:29:24.996489] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.723 [2024-07-25 01:29:24.996495] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:02.723 [2024-07-25 01:29:24.996512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.723 qpair failed and we were unable to recover it. 
00:29:02.723 [2024-07-25 01:29:25.006367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.723 [2024-07-25 01:29:25.006515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.723 [2024-07-25 01:29:25.006536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.723 [2024-07-25 01:29:25.006544] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.723 [2024-07-25 01:29:25.006550] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:02.723 [2024-07-25 01:29:25.006567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.723 qpair failed and we were unable to recover it. 
00:29:02.723 [2024-07-25 01:29:25.016399] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.723 [2024-07-25 01:29:25.016581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.723 [2024-07-25 01:29:25.016608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.723 [2024-07-25 01:29:25.016616] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.723 [2024-07-25 01:29:25.016622] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:02.723 [2024-07-25 01:29:25.016640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.723 qpair failed and we were unable to recover it. 
00:29:02.723 [2024-07-25 01:29:25.026493] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.723 [2024-07-25 01:29:25.026636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.723 [2024-07-25 01:29:25.026654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.723 [2024-07-25 01:29:25.026662] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.723 [2024-07-25 01:29:25.026668] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:02.723 [2024-07-25 01:29:25.026684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.723 qpair failed and we were unable to recover it. 
00:29:02.723 [2024-07-25 01:29:25.036585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.723 [2024-07-25 01:29:25.036727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.723 [2024-07-25 01:29:25.036744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.723 [2024-07-25 01:29:25.036752] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.723 [2024-07-25 01:29:25.036759] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:02.723 [2024-07-25 01:29:25.036776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.723 qpair failed and we were unable to recover it. 
00:29:02.723 [2024-07-25 01:29:25.046565] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.723 [2024-07-25 01:29:25.046703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.723 [2024-07-25 01:29:25.046720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.723 [2024-07-25 01:29:25.046727] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.723 [2024-07-25 01:29:25.046737] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:02.723 [2024-07-25 01:29:25.046754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.723 qpair failed and we were unable to recover it. 
00:29:02.723 [2024-07-25 01:29:25.056556] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.723 [2024-07-25 01:29:25.056698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.723 [2024-07-25 01:29:25.056715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.723 [2024-07-25 01:29:25.056722] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.723 [2024-07-25 01:29:25.056729] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:02.724 [2024-07-25 01:29:25.056745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.724 qpair failed and we were unable to recover it. 
00:29:02.724 [2024-07-25 01:29:25.066649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.724 [2024-07-25 01:29:25.066805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.724 [2024-07-25 01:29:25.066823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.724 [2024-07-25 01:29:25.066830] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.724 [2024-07-25 01:29:25.066836] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:02.724 [2024-07-25 01:29:25.066854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.724 qpair failed and we were unable to recover it. 
00:29:02.724 [2024-07-25 01:29:25.076625] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.724 [2024-07-25 01:29:25.076762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.724 [2024-07-25 01:29:25.076779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.724 [2024-07-25 01:29:25.076786] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.724 [2024-07-25 01:29:25.076793] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:02.724 [2024-07-25 01:29:25.076810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.724 qpair failed and we were unable to recover it. 
00:29:02.724 [2024-07-25 01:29:25.086589] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.724 [2024-07-25 01:29:25.086723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.724 [2024-07-25 01:29:25.086740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.724 [2024-07-25 01:29:25.086747] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.724 [2024-07-25 01:29:25.086754] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:02.724 [2024-07-25 01:29:25.086771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.724 qpair failed and we were unable to recover it. 
00:29:02.724 [2024-07-25 01:29:25.096675] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.724 [2024-07-25 01:29:25.096815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.724 [2024-07-25 01:29:25.096832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.724 [2024-07-25 01:29:25.096840] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.724 [2024-07-25 01:29:25.096846] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:02.724 [2024-07-25 01:29:25.096863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.724 qpair failed and we were unable to recover it. 
00:29:02.724 [2024-07-25 01:29:25.106773] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.724 [2024-07-25 01:29:25.106912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.724 [2024-07-25 01:29:25.106928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.724 [2024-07-25 01:29:25.106936] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.724 [2024-07-25 01:29:25.106942] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:02.724 [2024-07-25 01:29:25.106960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.724 qpair failed and we were unable to recover it. 
00:29:02.724 [2024-07-25 01:29:25.116772] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.724 [2024-07-25 01:29:25.116917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.724 [2024-07-25 01:29:25.116934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.724 [2024-07-25 01:29:25.116942] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.724 [2024-07-25 01:29:25.116948] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:02.724 [2024-07-25 01:29:25.116964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.724 qpair failed and we were unable to recover it. 
00:29:02.724 [2024-07-25 01:29:25.126800] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.724 [2024-07-25 01:29:25.127142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.724 [2024-07-25 01:29:25.127161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.724 [2024-07-25 01:29:25.127168] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.724 [2024-07-25 01:29:25.127175] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:02.724 [2024-07-25 01:29:25.127192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.724 qpair failed and we were unable to recover it. 
00:29:02.724 [2024-07-25 01:29:25.136828] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.724 [2024-07-25 01:29:25.136970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.724 [2024-07-25 01:29:25.136987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.724 [2024-07-25 01:29:25.136995] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.724 [2024-07-25 01:29:25.137004] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:02.724 [2024-07-25 01:29:25.137021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.724 qpair failed and we were unable to recover it. 
00:29:02.724 [2024-07-25 01:29:25.146837] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.724 [2024-07-25 01:29:25.146972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.724 [2024-07-25 01:29:25.146989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.724 [2024-07-25 01:29:25.146996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.724 [2024-07-25 01:29:25.147003] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:02.724 [2024-07-25 01:29:25.147020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.724 qpair failed and we were unable to recover it. 
00:29:02.724 [2024-07-25 01:29:25.156801] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.724 [2024-07-25 01:29:25.156985] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.724 [2024-07-25 01:29:25.157003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.724 [2024-07-25 01:29:25.157011] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.724 [2024-07-25 01:29:25.157017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:02.724 [2024-07-25 01:29:25.157035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.724 qpair failed and we were unable to recover it. 
00:29:02.724 [2024-07-25 01:29:25.166898] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.724 [2024-07-25 01:29:25.167038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.724 [2024-07-25 01:29:25.167061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.724 [2024-07-25 01:29:25.167068] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.724 [2024-07-25 01:29:25.167075] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:02.724 [2024-07-25 01:29:25.167092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.724 qpair failed and we were unable to recover it. 
00:29:02.724 [2024-07-25 01:29:25.176873] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.724 [2024-07-25 01:29:25.177012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.724 [2024-07-25 01:29:25.177031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.724 [2024-07-25 01:29:25.177039] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.724 [2024-07-25 01:29:25.177052] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:02.724 [2024-07-25 01:29:25.177070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.724 qpair failed and we were unable to recover it. 
00:29:02.724 [2024-07-25 01:29:25.186966] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.724 [2024-07-25 01:29:25.187110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.724 [2024-07-25 01:29:25.187127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.724 [2024-07-25 01:29:25.187135] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.724 [2024-07-25 01:29:25.187141] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:02.724 [2024-07-25 01:29:25.187158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.724 qpair failed and we were unable to recover it. 
00:29:02.725 [2024-07-25 01:29:25.196983] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.725 [2024-07-25 01:29:25.197128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.725 [2024-07-25 01:29:25.197146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.725 [2024-07-25 01:29:25.197153] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.725 [2024-07-25 01:29:25.197160] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:02.725 [2024-07-25 01:29:25.197176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.725 qpair failed and we were unable to recover it. 
00:29:02.725 [2024-07-25 01:29:25.207009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.725 [2024-07-25 01:29:25.207354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.725 [2024-07-25 01:29:25.207373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.725 [2024-07-25 01:29:25.207380] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.725 [2024-07-25 01:29:25.207387] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:02.725 [2024-07-25 01:29:25.207403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.725 qpair failed and we were unable to recover it. 
00:29:02.985 [2024-07-25 01:29:25.217056] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.985 [2024-07-25 01:29:25.217195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.985 [2024-07-25 01:29:25.217213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.985 [2024-07-25 01:29:25.217220] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.985 [2024-07-25 01:29:25.217227] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:02.985 [2024-07-25 01:29:25.217244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.985 qpair failed and we were unable to recover it. 
00:29:02.985 [2024-07-25 01:29:25.227255] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.985 [2024-07-25 01:29:25.227397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.985 [2024-07-25 01:29:25.227413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.985 [2024-07-25 01:29:25.227425] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.985 [2024-07-25 01:29:25.227431] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:02.985 [2024-07-25 01:29:25.227448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.985 qpair failed and we were unable to recover it. 
00:29:02.985 [2024-07-25 01:29:25.237097] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.985 [2024-07-25 01:29:25.237240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.985 [2024-07-25 01:29:25.237257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.985 [2024-07-25 01:29:25.237265] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.985 [2024-07-25 01:29:25.237272] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:02.985 [2024-07-25 01:29:25.237288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.985 qpair failed and we were unable to recover it. 
00:29:02.985 [2024-07-25 01:29:25.247119] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.985 [2024-07-25 01:29:25.247255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.985 [2024-07-25 01:29:25.247272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.985 [2024-07-25 01:29:25.247279] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.985 [2024-07-25 01:29:25.247285] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83ec000b90 00:29:02.985 [2024-07-25 01:29:25.247303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.985 qpair failed and we were unable to recover it. 
00:29:02.985 [2024-07-25 01:29:25.257143] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.985 [2024-07-25 01:29:25.257296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.985 [2024-07-25 01:29:25.257320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.985 [2024-07-25 01:29:25.257329] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.985 [2024-07-25 01:29:25.257336] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83e4000b90 00:29:02.985 [2024-07-25 01:29:25.257356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.985 qpair failed and we were unable to recover it. 
00:29:02.985 [2024-07-25 01:29:25.267180] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.985 [2024-07-25 01:29:25.267318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.985 [2024-07-25 01:29:25.267337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.985 [2024-07-25 01:29:25.267345] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.985 [2024-07-25 01:29:25.267352] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83e4000b90 00:29:02.985 [2024-07-25 01:29:25.267370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.985 qpair failed and we were unable to recover it. 00:29:02.985 [2024-07-25 01:29:25.267460] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:29:02.985 A controller has encountered a failure and is being reset. 00:29:02.985 [2024-07-25 01:29:25.267553] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1384010 (9): Bad file descriptor 00:29:02.985 Controller properly reset. 
00:29:02.985 Initializing NVMe Controllers 00:29:02.985 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:02.985 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:02.985 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:29:02.985 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:29:02.985 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:29:02.985 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:29:02.985 Initialization complete. Launching workers. 00:29:02.985 Starting thread on core 1 00:29:02.985 Starting thread on core 2 00:29:02.985 Starting thread on core 3 00:29:02.985 Starting thread on core 0 00:29:02.985 01:29:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:29:02.985 00:29:02.985 real 0m11.173s 00:29:02.985 user 0m20.785s 00:29:02.985 sys 0m4.254s 00:29:02.985 01:29:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:02.985 01:29:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:02.985 ************************************ 00:29:02.985 END TEST nvmf_target_disconnect_tc2 00:29:02.985 ************************************ 00:29:02.985 01:29:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:29:02.985 01:29:25 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:29:02.985 01:29:25 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:29:02.985 01:29:25 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:29:02.985 01:29:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:02.985 01:29:25 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:29:02.985 01:29:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:02.985 01:29:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:29:02.985 01:29:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:02.985 01:29:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:02.985 rmmod nvme_tcp 00:29:02.985 rmmod nvme_fabrics 00:29:02.985 rmmod nvme_keyring 00:29:02.985 01:29:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:02.985 01:29:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:29:02.985 01:29:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:29:02.985 01:29:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 1059126 ']' 00:29:02.986 01:29:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 1059126 00:29:02.986 01:29:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 1059126 ']' 00:29:02.986 01:29:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 1059126 00:29:02.986 01:29:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:29:02.986 01:29:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:02.986 01:29:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1059126 00:29:02.986 01:29:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:29:02.986 01:29:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:29:02.986 01:29:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1059126' 00:29:02.986 killing process with pid 
1059126 00:29:02.986 01:29:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 1059126 00:29:02.986 01:29:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 1059126 00:29:03.245 01:29:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:03.245 01:29:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:03.245 01:29:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:03.245 01:29:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:03.245 01:29:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:03.245 01:29:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:03.245 01:29:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:03.245 01:29:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:05.782 01:29:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:05.782 00:29:05.782 real 0m19.277s 00:29:05.782 user 0m47.352s 00:29:05.782 sys 0m8.711s 00:29:05.782 01:29:27 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:05.782 01:29:27 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:05.782 ************************************ 00:29:05.782 END TEST nvmf_target_disconnect 00:29:05.782 ************************************ 00:29:05.782 01:29:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:05.782 01:29:27 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:29:05.782 01:29:27 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:05.782 01:29:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:05.782 01:29:27 nvmf_tcp -- 
nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:29:05.782 00:29:05.782 real 20m55.424s 00:29:05.782 user 45m6.551s 00:29:05.782 sys 6m17.304s 00:29:05.782 01:29:27 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:05.782 01:29:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:05.782 ************************************ 00:29:05.782 END TEST nvmf_tcp 00:29:05.782 ************************************ 00:29:05.782 01:29:27 -- common/autotest_common.sh@1142 -- # return 0 00:29:05.782 01:29:27 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:29:05.782 01:29:27 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:05.782 01:29:27 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:05.782 01:29:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:05.782 01:29:27 -- common/autotest_common.sh@10 -- # set +x 00:29:05.782 ************************************ 00:29:05.782 START TEST spdkcli_nvmf_tcp 00:29:05.782 ************************************ 00:29:05.782 01:29:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:05.782 * Looking for test storage... 
00:29:05.782 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:29:05.782 01:29:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:29:05.782 01:29:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:29:05.782 01:29:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:29:05.782 01:29:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:05.782 01:29:27 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:29:05.782 01:29:27 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:05.782 01:29:27 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:05.782 01:29:27 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:05.782 01:29:27 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:05.782 01:29:27 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:05.782 01:29:27 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:05.782 01:29:27 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:05.782 01:29:27 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:05.782 01:29:27 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:05.782 01:29:27 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:05.782 01:29:27 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:05.782 01:29:27 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:05.782 01:29:27 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:05.782 01:29:27 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:05.782 01:29:27 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:05.782 01:29:27 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:05.782 01:29:27 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:05.782 01:29:27 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:05.782 01:29:27 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:05.782 01:29:27 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:05.782 01:29:27 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.782 01:29:27 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.783 01:29:27 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.783 01:29:27 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:29:05.783 01:29:27 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.783 01:29:27 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:29:05.783 01:29:27 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:05.783 01:29:27 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:05.783 01:29:27 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:05.783 01:29:27 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:05.783 01:29:27 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:05.783 01:29:27 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:05.783 01:29:27 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:05.783 01:29:27 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:05.783 01:29:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:29:05.783 01:29:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:29:05.783 01:29:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 
00:29:05.783 01:29:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:29:05.783 01:29:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:05.783 01:29:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:05.783 01:29:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:29:05.783 01:29:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1060801 00:29:05.783 01:29:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1060801 00:29:05.783 01:29:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 1060801 ']' 00:29:05.783 01:29:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:05.783 01:29:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:05.783 01:29:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:05.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:05.783 01:29:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:05.783 01:29:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:29:05.783 01:29:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:05.783 [2024-07-25 01:29:28.020820] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:29:05.783 [2024-07-25 01:29:28.020887] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1060801 ] 00:29:05.783 EAL: No free 2048 kB hugepages reported on node 1 00:29:05.783 [2024-07-25 01:29:28.072581] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:05.783 [2024-07-25 01:29:28.153671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:05.783 [2024-07-25 01:29:28.153674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:06.353 01:29:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:06.353 01:29:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:29:06.353 01:29:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:29:06.353 01:29:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:06.353 01:29:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:06.613 01:29:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:29:06.613 01:29:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:29:06.613 01:29:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:29:06.613 01:29:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:06.613 01:29:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:06.613 01:29:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:29:06.613 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:29:06.613 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:29:06.613 '\''/bdevs/malloc create 32 512 Malloc4'\'' 
'\''Malloc4'\'' True 00:29:06.613 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:29:06.613 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:29:06.613 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:29:06.613 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:06.613 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:29:06.613 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:29:06.613 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:06.613 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:06.613 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:29:06.613 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:06.613 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:06.613 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:29:06.613 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:06.613 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:06.613 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:06.613 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:06.613 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:29:06.613 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:29:06.613 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:06.613 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:29:06.613 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:06.613 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:29:06.613 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:29:06.613 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:29:06.613 ' 00:29:09.158 [2024-07-25 01:29:31.236627] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:10.098 [2024-07-25 01:29:32.412523] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:29:12.640 [2024-07-25 01:29:34.579079] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:29:14.023 [2024-07-25 01:29:36.440804] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:29:15.406 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:29:15.406 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:29:15.406 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:29:15.406 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:29:15.406 Executing command: 
['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:29:15.406 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:29:15.406 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:29:15.406 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:15.406 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:29:15.406 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:29:15.406 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:15.406 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:15.406 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:29:15.406 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:15.406 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:15.406 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:29:15.406 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:15.406 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:29:15.406 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create 
nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:15.406 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:15.406 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:29:15.406 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:29:15.406 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:29:15.406 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:29:15.406 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:15.406 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:29:15.406 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:29:15.406 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:29:15.666 01:29:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:29:15.666 01:29:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:15.666 01:29:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:15.666 01:29:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:29:15.666 01:29:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:15.666 01:29:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:15.666 01:29:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:29:15.666 01:29:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:29:15.927 01:29:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:29:15.927 01:29:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:29:15.927 01:29:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:29:15.927 01:29:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:15.927 01:29:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:16.187 01:29:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:29:16.187 01:29:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:16.187 01:29:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:16.187 01:29:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:29:16.187 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:29:16.187 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:16.187 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:29:16.187 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:29:16.187 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:29:16.187 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 
00:29:16.187 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:16.187 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:29:16.187 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:29:16.187 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:29:16.187 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:29:16.187 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:29:16.187 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:29:16.187 ' 00:29:21.469 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:29:21.469 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:29:21.469 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:21.469 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:29:21.469 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:29:21.469 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:29:21.469 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:29:21.469 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:21.469 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:29:21.469 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:29:21.469 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:29:21.469 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:29:21.469 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:29:21.469 Executing command: 
['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:29:21.469 01:29:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:29:21.469 01:29:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:21.469 01:29:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:21.469 01:29:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1060801 00:29:21.469 01:29:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 1060801 ']' 00:29:21.469 01:29:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 1060801 00:29:21.469 01:29:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:29:21.469 01:29:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:21.469 01:29:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1060801 00:29:21.469 01:29:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:21.469 01:29:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:21.469 01:29:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1060801' 00:29:21.469 killing process with pid 1060801 00:29:21.469 01:29:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 1060801 00:29:21.469 01:29:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 1060801 00:29:21.469 01:29:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:29:21.469 01:29:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:29:21.469 01:29:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1060801 ']' 00:29:21.469 01:29:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1060801 00:29:21.469 01:29:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 1060801 ']' 00:29:21.469 01:29:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 1060801 00:29:21.470 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1060801) - No such process 00:29:21.470 01:29:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 1060801 is not found' 00:29:21.470 Process with pid 1060801 is not found 00:29:21.470 01:29:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:29:21.470 01:29:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:29:21.470 01:29:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:29:21.470 00:29:21.470 real 0m15.790s 00:29:21.470 user 0m32.740s 00:29:21.470 sys 0m0.718s 00:29:21.470 01:29:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:21.470 01:29:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:21.470 ************************************ 00:29:21.470 END TEST spdkcli_nvmf_tcp 00:29:21.470 ************************************ 00:29:21.470 01:29:43 -- common/autotest_common.sh@1142 -- # return 0 00:29:21.470 01:29:43 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:29:21.470 01:29:43 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:21.470 01:29:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:21.470 01:29:43 -- common/autotest_common.sh@10 -- # set +x 00:29:21.470 ************************************ 00:29:21.470 START TEST nvmf_identify_passthru 00:29:21.470 ************************************ 00:29:21.470 01:29:43 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:29:21.470 * Looking for test storage... 
00:29:21.470 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:21.470 01:29:43 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:21.470 01:29:43 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:29:21.470 01:29:43 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:21.470 01:29:43 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:21.470 01:29:43 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:21.470 01:29:43 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:21.470 01:29:43 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:21.470 01:29:43 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:21.470 01:29:43 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:21.470 01:29:43 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:21.470 01:29:43 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:21.470 01:29:43 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:21.470 01:29:43 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:21.470 01:29:43 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:21.470 01:29:43 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:21.470 01:29:43 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:21.470 01:29:43 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:21.470 01:29:43 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:21.470 
01:29:43 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:21.470 01:29:43 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:21.470 01:29:43 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:21.470 01:29:43 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:21.470 01:29:43 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.470 01:29:43 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.470 01:29:43 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.470 01:29:43 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:29:21.470 01:29:43 nvmf_identify_passthru -- 
paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.470 01:29:43 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:29:21.470 01:29:43 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:21.470 01:29:43 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:21.470 01:29:43 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:21.470 01:29:43 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:21.470 01:29:43 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:21.470 01:29:43 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:21.470 01:29:43 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:21.470 01:29:43 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:21.470 01:29:43 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:21.470 01:29:43 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:21.470 01:29:43 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:21.470 01:29:43 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:21.470 01:29:43 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.470 01:29:43 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.470 01:29:43 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.470 01:29:43 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:29:21.470 01:29:43 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.470 01:29:43 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:29:21.470 01:29:43 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:21.470 01:29:43 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:21.470 01:29:43 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:21.470 01:29:43 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:21.470 01:29:43 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:21.470 01:29:43 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:21.470 01:29:43 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:21.470 01:29:43 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:21.470 01:29:43 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:21.470 01:29:43 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:21.470 01:29:43 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:29:21.470 01:29:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@291 
-- # local -a pci_devs 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:26.754 
01:29:49 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:26.754 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:26.754 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:26.754 Found net devices under 0000:86:00.0: cvl_0_0 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:26.754 01:29:49 nvmf_identify_passthru -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:26.754 Found net devices under 0000:86:00.1: cvl_0_1 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:26.754 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:27.015 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:27.015 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:27.015 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:27.015 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:27.015 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:27.015 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:27.015 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:27.015 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:27.015 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms 00:29:27.015 00:29:27.015 --- 10.0.0.2 ping statistics --- 00:29:27.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:27.015 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:29:27.015 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:27.015 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:27.015 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.381 ms 00:29:27.015 00:29:27.015 --- 10.0.0.1 ping statistics --- 00:29:27.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:27.015 rtt min/avg/max/mdev = 0.381/0.381/0.381/0.000 ms 00:29:27.015 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:27.015 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:29:27.015 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:27.015 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:27.015 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:27.015 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:27.015 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:27.015 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:27.015 01:29:49 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:27.015 01:29:49 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:29:27.015 01:29:49 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:27.015 01:29:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:27.015 01:29:49 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:29:27.015 01:29:49 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:29:27.015 01:29:49 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:29:27.015 01:29:49 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:29:27.015 01:29:49 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:29:27.015 01:29:49 nvmf_identify_passthru -- 
common/autotest_common.sh@1513 -- # bdfs=() 00:29:27.015 01:29:49 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:29:27.015 01:29:49 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:27.015 01:29:49 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:27.015 01:29:49 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:29:27.015 01:29:49 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:29:27.015 01:29:49 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:29:27.015 01:29:49 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:5e:00.0 00:29:27.015 01:29:49 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:29:27.015 01:29:49 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:29:27.015 01:29:49 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:29:27.015 01:29:49 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:29:27.015 01:29:49 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:29:27.275 EAL: No free 2048 kB hugepages reported on node 1 00:29:31.470 01:29:53 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F0E1P0FGN 00:29:31.470 01:29:53 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:29:31.470 01:29:53 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 
00:29:31.470 01:29:53 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:29:31.470 EAL: No free 2048 kB hugepages reported on node 1 00:29:35.668 01:29:57 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:29:35.668 01:29:57 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:29:35.668 01:29:57 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:35.668 01:29:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:35.668 01:29:57 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:29:35.668 01:29:57 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:35.668 01:29:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:35.668 01:29:57 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1067780 00:29:35.668 01:29:57 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:35.668 01:29:57 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:29:35.668 01:29:57 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1067780 00:29:35.668 01:29:57 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 1067780 ']' 00:29:35.668 01:29:57 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:35.668 01:29:57 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:35.668 01:29:57 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:29:35.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:35.668 01:29:57 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:35.668 01:29:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:35.668 [2024-07-25 01:29:57.806606] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:29:35.668 [2024-07-25 01:29:57.806650] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:35.668 EAL: No free 2048 kB hugepages reported on node 1 00:29:35.668 [2024-07-25 01:29:57.873049] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:35.668 [2024-07-25 01:29:57.971092] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:35.668 [2024-07-25 01:29:57.971129] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:35.668 [2024-07-25 01:29:57.971136] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:35.668 [2024-07-25 01:29:57.971141] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:35.668 [2024-07-25 01:29:57.971146] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:35.668 [2024-07-25 01:29:57.971228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:35.668 [2024-07-25 01:29:57.971316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:35.668 [2024-07-25 01:29:57.971404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:35.668 [2024-07-25 01:29:57.971405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:36.237 01:29:58 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:36.237 01:29:58 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:29:36.237 01:29:58 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:29:36.237 01:29:58 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:36.237 01:29:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:36.237 INFO: Log level set to 20 00:29:36.237 INFO: Requests: 00:29:36.237 { 00:29:36.237 "jsonrpc": "2.0", 00:29:36.237 "method": "nvmf_set_config", 00:29:36.237 "id": 1, 00:29:36.237 "params": { 00:29:36.237 "admin_cmd_passthru": { 00:29:36.237 "identify_ctrlr": true 00:29:36.237 } 00:29:36.237 } 00:29:36.237 } 00:29:36.237 00:29:36.237 INFO: response: 00:29:36.237 { 00:29:36.237 "jsonrpc": "2.0", 00:29:36.237 "id": 1, 00:29:36.237 "result": true 00:29:36.237 } 00:29:36.237 00:29:36.237 01:29:58 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:36.237 01:29:58 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:29:36.237 01:29:58 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:36.237 01:29:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:36.237 INFO: Setting log level to 20 00:29:36.237 INFO: Setting log level to 20 00:29:36.237 INFO: Log level set to 20 00:29:36.237 INFO: Log level set to 20 00:29:36.237 
INFO: Requests: 00:29:36.237 { 00:29:36.237 "jsonrpc": "2.0", 00:29:36.237 "method": "framework_start_init", 00:29:36.237 "id": 1 00:29:36.237 } 00:29:36.237 00:29:36.237 INFO: Requests: 00:29:36.237 { 00:29:36.237 "jsonrpc": "2.0", 00:29:36.237 "method": "framework_start_init", 00:29:36.237 "id": 1 00:29:36.237 } 00:29:36.237 00:29:36.497 [2024-07-25 01:29:58.744510] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:29:36.497 INFO: response: 00:29:36.497 { 00:29:36.497 "jsonrpc": "2.0", 00:29:36.497 "id": 1, 00:29:36.497 "result": true 00:29:36.497 } 00:29:36.497 00:29:36.497 INFO: response: 00:29:36.497 { 00:29:36.497 "jsonrpc": "2.0", 00:29:36.497 "id": 1, 00:29:36.497 "result": true 00:29:36.497 } 00:29:36.497 00:29:36.497 01:29:58 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:36.497 01:29:58 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:36.497 01:29:58 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:36.497 01:29:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:36.497 INFO: Setting log level to 40 00:29:36.497 INFO: Setting log level to 40 00:29:36.497 INFO: Setting log level to 40 00:29:36.497 [2024-07-25 01:29:58.757831] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:36.497 01:29:58 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:36.497 01:29:58 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:29:36.497 01:29:58 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:36.497 01:29:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:36.497 01:29:58 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:29:36.497 01:29:58 
nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:36.497 01:29:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:39.871 Nvme0n1 00:29:39.871 01:30:01 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:39.871 01:30:01 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:29:39.871 01:30:01 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:39.871 01:30:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:39.871 01:30:01 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:39.871 01:30:01 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:29:39.871 01:30:01 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:39.871 01:30:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:39.871 01:30:01 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:39.871 01:30:01 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:39.871 01:30:01 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:39.871 01:30:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:39.871 [2024-07-25 01:30:01.656234] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:39.871 01:30:01 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:39.871 01:30:01 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:29:39.871 01:30:01 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:39.871 01:30:01 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:39.871 [ 00:29:39.871 { 00:29:39.871 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:39.871 "subtype": "Discovery", 00:29:39.871 "listen_addresses": [], 00:29:39.871 "allow_any_host": true, 00:29:39.871 "hosts": [] 00:29:39.871 }, 00:29:39.871 { 00:29:39.871 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:39.871 "subtype": "NVMe", 00:29:39.871 "listen_addresses": [ 00:29:39.871 { 00:29:39.871 "trtype": "TCP", 00:29:39.871 "adrfam": "IPv4", 00:29:39.871 "traddr": "10.0.0.2", 00:29:39.871 "trsvcid": "4420" 00:29:39.871 } 00:29:39.871 ], 00:29:39.871 "allow_any_host": true, 00:29:39.871 "hosts": [], 00:29:39.871 "serial_number": "SPDK00000000000001", 00:29:39.871 "model_number": "SPDK bdev Controller", 00:29:39.871 "max_namespaces": 1, 00:29:39.871 "min_cntlid": 1, 00:29:39.871 "max_cntlid": 65519, 00:29:39.871 "namespaces": [ 00:29:39.871 { 00:29:39.871 "nsid": 1, 00:29:39.871 "bdev_name": "Nvme0n1", 00:29:39.871 "name": "Nvme0n1", 00:29:39.871 "nguid": "3619D2C06606420B83D69F9641F78629", 00:29:39.871 "uuid": "3619d2c0-6606-420b-83d6-9f9641f78629" 00:29:39.871 } 00:29:39.871 ] 00:29:39.871 } 00:29:39.871 ] 00:29:39.871 01:30:01 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:39.871 01:30:01 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:39.871 01:30:01 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:29:39.871 01:30:01 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:29:39.871 EAL: No free 2048 kB hugepages reported on node 1 00:29:39.871 01:30:01 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F0E1P0FGN 00:29:39.871 01:30:01 nvmf_identify_passthru -- 
target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:39.871 01:30:01 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:29:39.871 01:30:01 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:29:39.871 EAL: No free 2048 kB hugepages reported on node 1 00:29:39.871 01:30:01 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:29:39.871 01:30:01 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F0E1P0FGN '!=' BTLJ72430F0E1P0FGN ']' 00:29:39.871 01:30:01 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:29:39.871 01:30:01 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:39.871 01:30:01 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:39.871 01:30:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:39.871 01:30:01 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:39.871 01:30:01 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:29:39.871 01:30:01 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:29:39.871 01:30:01 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:39.871 01:30:01 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:29:39.871 01:30:01 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:39.871 01:30:01 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:29:39.871 01:30:01 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:39.871 01:30:01 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:39.871 rmmod 
nvme_tcp 00:29:39.871 rmmod nvme_fabrics 00:29:39.871 rmmod nvme_keyring 00:29:39.871 01:30:01 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:39.871 01:30:01 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:29:39.871 01:30:01 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:29:39.871 01:30:01 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 1067780 ']' 00:29:39.871 01:30:01 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 1067780 00:29:39.871 01:30:01 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 1067780 ']' 00:29:39.871 01:30:01 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 1067780 00:29:39.871 01:30:01 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:29:39.871 01:30:01 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:39.871 01:30:01 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1067780 00:29:39.871 01:30:02 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:39.871 01:30:02 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:39.871 01:30:02 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1067780' 00:29:39.871 killing process with pid 1067780 00:29:39.871 01:30:02 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 1067780 00:29:39.871 01:30:02 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 1067780 00:29:41.250 01:30:03 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:41.250 01:30:03 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:41.250 01:30:03 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:41.250 01:30:03 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:29:41.251 01:30:03 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:41.251 01:30:03 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:41.251 01:30:03 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:41.251 01:30:03 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:43.159 01:30:05 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:43.159 00:29:43.159 real 0m21.850s 00:29:43.159 user 0m29.572s 00:29:43.159 sys 0m4.961s 00:29:43.159 01:30:05 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:43.159 01:30:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:43.159 ************************************ 00:29:43.159 END TEST nvmf_identify_passthru 00:29:43.159 ************************************ 00:29:43.159 01:30:05 -- common/autotest_common.sh@1142 -- # return 0 00:29:43.159 01:30:05 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:29:43.159 01:30:05 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:43.159 01:30:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:43.159 01:30:05 -- common/autotest_common.sh@10 -- # set +x 00:29:43.159 ************************************ 00:29:43.159 START TEST nvmf_dif 00:29:43.159 ************************************ 00:29:43.159 01:30:05 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:29:43.419 * Looking for test storage... 
00:29:43.419 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:43.419 01:30:05 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:43.419 01:30:05 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:29:43.419 01:30:05 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:43.419 01:30:05 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:43.419 01:30:05 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:43.419 01:30:05 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:43.419 01:30:05 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:43.419 01:30:05 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:43.419 01:30:05 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:43.419 01:30:05 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:43.419 01:30:05 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:43.419 01:30:05 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:43.419 01:30:05 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:43.419 01:30:05 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:43.419 01:30:05 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:43.419 01:30:05 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:43.419 01:30:05 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:43.419 01:30:05 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:43.420 01:30:05 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:43.420 01:30:05 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:43.420 01:30:05 nvmf_dif -- scripts/common.sh@516 -- # 
[[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:43.420 01:30:05 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:43.420 01:30:05 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:43.420 01:30:05 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:43.420 01:30:05 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:43.420 01:30:05 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:29:43.420 01:30:05 nvmf_dif -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:43.420 01:30:05 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:29:43.420 01:30:05 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:43.420 01:30:05 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:43.420 01:30:05 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:43.420 01:30:05 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:43.420 01:30:05 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:43.420 01:30:05 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:43.420 01:30:05 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:43.420 01:30:05 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:43.420 01:30:05 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:29:43.420 01:30:05 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:29:43.420 01:30:05 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:29:43.420 01:30:05 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:29:43.420 01:30:05 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:29:43.420 01:30:05 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:43.420 01:30:05 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:43.420 01:30:05 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:43.420 01:30:05 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:43.420 01:30:05 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:43.420 01:30:05 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:43.420 01:30:05 nvmf_dif -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:43.420 01:30:05 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:43.420 01:30:05 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:43.420 01:30:05 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:43.420 01:30:05 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:29:43.420 01:30:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:48.700 01:30:10 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:48.700 01:30:10 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:29:48.700 01:30:10 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:48.700 01:30:10 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:48.700 01:30:10 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:48.700 01:30:10 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:48.700 01:30:10 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:48.700 01:30:10 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:29:48.700 01:30:10 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:48.700 01:30:10 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:29:48.700 01:30:10 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:29:48.700 01:30:10 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:29:48.700 01:30:10 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:29:48.700 01:30:10 nvmf_dif -- nvmf/common.sh@298 -- # mlx=() 00:29:48.700 01:30:10 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:29:48.700 01:30:10 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:48.700 01:30:10 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:48.700 01:30:10 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:48.700 01:30:10 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:29:48.700 01:30:10 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:48.700 01:30:10 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:48.700 01:30:10 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:48.700 01:30:10 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:48.700 01:30:10 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:48.700 01:30:10 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:48.700 01:30:10 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:48.700 01:30:10 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:48.700 01:30:10 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:48.700 01:30:10 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:48.700 01:30:10 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:48.700 01:30:10 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:48.700 01:30:10 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:48.700 01:30:10 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:48.700 01:30:10 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:48.700 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:48.700 01:30:10 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:48.700 01:30:10 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:48.700 01:30:10 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:48.700 01:30:10 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:48.700 01:30:10 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:48.700 01:30:10 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:48.700 01:30:10 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 
(0x8086 - 0x159b)' 00:29:48.700 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:48.700 01:30:10 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:48.700 01:30:10 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:48.700 01:30:10 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:48.700 01:30:10 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:48.700 01:30:10 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:48.700 01:30:10 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:48.700 01:30:10 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:48.700 01:30:10 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:48.700 01:30:10 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:48.700 01:30:10 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:48.700 01:30:10 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:48.700 01:30:10 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:48.700 01:30:10 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:48.700 01:30:10 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:48.700 01:30:10 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:48.700 01:30:10 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:48.700 Found net devices under 0000:86:00.0: cvl_0_0 00:29:48.701 01:30:10 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:48.701 01:30:10 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:48.701 01:30:10 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:48.701 01:30:10 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:48.701 01:30:10 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:48.701 01:30:10 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up 
]] 00:29:48.701 01:30:10 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:48.701 01:30:10 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:48.701 01:30:10 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:48.701 Found net devices under 0000:86:00.1: cvl_0_1 00:29:48.701 01:30:10 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:48.701 01:30:10 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:48.701 01:30:10 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:29:48.701 01:30:10 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:48.701 01:30:10 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:48.701 01:30:10 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:48.701 01:30:10 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:48.701 01:30:10 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:48.701 01:30:10 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:48.701 01:30:10 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:48.701 01:30:10 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:48.701 01:30:10 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:48.701 01:30:10 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:48.701 01:30:10 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:48.701 01:30:10 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:48.701 01:30:10 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:48.701 01:30:10 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:48.701 01:30:10 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:48.701 01:30:10 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:48.701 01:30:10 nvmf_dif -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:48.701 01:30:10 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:48.701 01:30:10 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:48.701 01:30:10 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:48.701 01:30:10 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:48.701 01:30:10 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:48.701 01:30:10 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:48.701 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:48.701 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:29:48.701 00:29:48.701 --- 10.0.0.2 ping statistics --- 00:29:48.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:48.701 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:29:48.701 01:30:10 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:48.701 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:48.701 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.386 ms 00:29:48.701 00:29:48.701 --- 10.0.0.1 ping statistics --- 00:29:48.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:48.701 rtt min/avg/max/mdev = 0.386/0.386/0.386/0.000 ms 00:29:48.701 01:30:10 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:48.701 01:30:10 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:29:48.701 01:30:10 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:29:48.701 01:30:10 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:51.241 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:29:51.241 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:29:51.241 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:29:51.241 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:29:51.241 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:29:51.241 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:29:51.241 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:29:51.241 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:29:51.241 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:29:51.241 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:29:51.241 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:29:51.241 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:29:51.241 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:29:51.241 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:29:51.241 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:29:51.241 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:29:51.241 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:29:51.241 01:30:13 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:51.241 01:30:13 
nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:51.241 01:30:13 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:51.241 01:30:13 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:51.241 01:30:13 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:51.241 01:30:13 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:51.241 01:30:13 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:29:51.242 01:30:13 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:29:51.242 01:30:13 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:51.242 01:30:13 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:51.242 01:30:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:51.242 01:30:13 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=1073804 00:29:51.242 01:30:13 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 1073804 00:29:51.242 01:30:13 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 1073804 ']' 00:29:51.242 01:30:13 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:51.242 01:30:13 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:51.242 01:30:13 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:51.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:51.242 01:30:13 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:29:51.242 01:30:13 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:51.242 01:30:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:51.242 [2024-07-25 01:30:13.644509] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:29:51.242 [2024-07-25 01:30:13.644552] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:51.242 EAL: No free 2048 kB hugepages reported on node 1 00:29:51.242 [2024-07-25 01:30:13.700880] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:51.502 [2024-07-25 01:30:13.780472] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:51.502 [2024-07-25 01:30:13.780507] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:51.502 [2024-07-25 01:30:13.780514] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:51.502 [2024-07-25 01:30:13.780520] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:51.502 [2024-07-25 01:30:13.780525] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:51.502 [2024-07-25 01:30:13.780541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:52.073 01:30:14 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:52.073 01:30:14 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:29:52.073 01:30:14 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:52.073 01:30:14 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:52.073 01:30:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:52.073 01:30:14 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:52.073 01:30:14 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:29:52.073 01:30:14 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:29:52.073 01:30:14 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.073 01:30:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:52.073 [2024-07-25 01:30:14.470247] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:52.073 01:30:14 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.073 01:30:14 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:29:52.073 01:30:14 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:52.073 01:30:14 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:52.073 01:30:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:52.073 ************************************ 00:29:52.073 START TEST fio_dif_1_default 00:29:52.073 ************************************ 00:29:52.073 01:30:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:29:52.073 01:30:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:29:52.073 01:30:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:29:52.073 01:30:14 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:29:52.073 01:30:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:29:52.073 01:30:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:29:52.073 01:30:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:52.073 01:30:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.073 01:30:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:52.073 bdev_null0 00:29:52.073 01:30:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.073 01:30:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:52.073 01:30:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.073 01:30:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:52.073 01:30:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.073 01:30:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:52.073 01:30:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.074 01:30:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:52.074 01:30:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.074 01:30:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:52.074 01:30:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.074 01:30:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:52.074 [2024-07-25 01:30:14.530509] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:52.074 01:30:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.074 01:30:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:29:52.074 01:30:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:52.074 01:30:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:52.074 01:30:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:29:52.074 01:30:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:29:52.074 01:30:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:52.074 01:30:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:29:52.074 01:30:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:52.074 01:30:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:29:52.074 01:30:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:52.074 01:30:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:29:52.074 01:30:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:52.074 01:30:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:29:52.074 01:30:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:52.074 01:30:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:29:52.074 01:30:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for 
sanitizer in "${sanitizers[@]}" 00:29:52.074 01:30:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:29:52.074 01:30:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:52.074 01:30:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:52.074 { 00:29:52.074 "params": { 00:29:52.074 "name": "Nvme$subsystem", 00:29:52.074 "trtype": "$TEST_TRANSPORT", 00:29:52.074 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:52.074 "adrfam": "ipv4", 00:29:52.074 "trsvcid": "$NVMF_PORT", 00:29:52.074 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:52.074 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:52.074 "hdgst": ${hdgst:-false}, 00:29:52.074 "ddgst": ${ddgst:-false} 00:29:52.074 }, 00:29:52.074 "method": "bdev_nvme_attach_controller" 00:29:52.074 } 00:29:52.074 EOF 00:29:52.074 )") 00:29:52.074 01:30:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:52.074 01:30:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:29:52.074 01:30:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:29:52.074 01:30:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:29:52.074 01:30:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:29:52.074 01:30:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:52.074 01:30:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 
00:29:52.074 01:30:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:29:52.074 01:30:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:52.074 "params": { 00:29:52.074 "name": "Nvme0", 00:29:52.074 "trtype": "tcp", 00:29:52.074 "traddr": "10.0.0.2", 00:29:52.074 "adrfam": "ipv4", 00:29:52.074 "trsvcid": "4420", 00:29:52.074 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:52.074 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:52.074 "hdgst": false, 00:29:52.074 "ddgst": false 00:29:52.074 }, 00:29:52.074 "method": "bdev_nvme_attach_controller" 00:29:52.074 }' 00:29:52.370 01:30:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:52.370 01:30:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:52.370 01:30:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:52.370 01:30:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:52.370 01:30:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:52.370 01:30:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:52.370 01:30:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:52.370 01:30:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:52.370 01:30:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:52.370 01:30:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:52.627 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:52.628 fio-3.35 
00:29:52.628 Starting 1 thread 00:29:52.628 EAL: No free 2048 kB hugepages reported on node 1 00:30:04.826 00:30:04.826 filename0: (groupid=0, jobs=1): err= 0: pid=1074181: Thu Jul 25 01:30:25 2024 00:30:04.826 read: IOPS=181, BW=725KiB/s (743kB/s)(7280KiB/10037msec) 00:30:04.826 slat (nsec): min=5887, max=54254, avg=6153.33, stdev=1382.37 00:30:04.826 clat (usec): min=1589, max=44667, avg=22041.48, stdev=20207.96 00:30:04.826 lat (usec): min=1595, max=44699, avg=22047.64, stdev=20207.92 00:30:04.826 clat percentiles (usec): 00:30:04.826 | 1.00th=[ 1713], 5.00th=[ 1729], 10.00th=[ 1729], 20.00th=[ 1745], 00:30:04.826 | 30.00th=[ 1795], 40.00th=[ 1876], 50.00th=[41681], 60.00th=[42206], 00:30:04.826 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:30:04.826 | 99.00th=[42730], 99.50th=[43254], 99.90th=[44827], 99.95th=[44827], 00:30:04.826 | 99.99th=[44827] 00:30:04.826 bw ( KiB/s): min= 704, max= 768, per=100.00%, avg=726.40, stdev=31.32, samples=20 00:30:04.826 iops : min= 176, max= 192, avg=181.60, stdev= 7.83, samples=20 00:30:04.826 lat (msec) : 2=49.56%, 4=0.33%, 50=50.11% 00:30:04.826 cpu : usr=94.41%, sys=5.33%, ctx=10, majf=0, minf=248 00:30:04.826 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:04.826 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:04.826 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:04.826 issued rwts: total=1820,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:04.826 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:04.826 00:30:04.826 Run status group 0 (all jobs): 00:30:04.826 READ: bw=725KiB/s (743kB/s), 725KiB/s-725KiB/s (743kB/s-743kB/s), io=7280KiB (7455kB), run=10037-10037msec 00:30:04.826 01:30:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:30:04.826 01:30:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:30:04.826 01:30:25 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@45 -- # for sub in "$@" 00:30:04.826 01:30:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:04.826 01:30:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:30:04.826 01:30:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:04.826 01:30:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:04.826 01:30:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:04.826 01:30:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:04.826 01:30:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:04.826 01:30:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:04.826 01:30:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:04.826 01:30:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:04.826 00:30:04.826 real 0m11.099s 00:30:04.826 user 0m16.257s 00:30:04.826 sys 0m0.880s 00:30:04.826 01:30:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:04.826 01:30:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:04.826 ************************************ 00:30:04.826 END TEST fio_dif_1_default 00:30:04.826 ************************************ 00:30:04.826 01:30:25 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:30:04.826 01:30:25 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:30:04.826 01:30:25 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:04.826 01:30:25 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:04.826 01:30:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:04.826 ************************************ 00:30:04.826 START 
TEST fio_dif_1_multi_subsystems 00:30:04.826 ************************************ 00:30:04.826 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:30:04.826 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:30:04.826 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:30:04.826 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:30:04.826 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:30:04.826 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:30:04.826 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:30:04.826 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:04.826 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:04.826 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:04.826 bdev_null0 00:30:04.826 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:04.826 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:04.826 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:04.826 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:04.826 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:04.826 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:04.826 01:30:25 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:04.826 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:04.826 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:04.826 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:04.826 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:04.826 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:04.826 [2024-07-25 01:30:25.699944] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:04.826 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:04.827 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:30:04.827 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:30:04.827 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:30:04.827 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:04.827 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:04.827 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:04.827 bdev_null1 00:30:04.827 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:04.827 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:04.827 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:30:04.827 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:04.827 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:04.827 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:04.827 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:04.827 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:04.827 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:04.827 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:04.827 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:04.827 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:04.827 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:04.827 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:30:04.827 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:30:04.827 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:04.827 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:04.827 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:30:04.827 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:04.827 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:30:04.827 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:04.827 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:04.827 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:04.827 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:04.827 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:04.827 { 00:30:04.827 "params": { 00:30:04.827 "name": "Nvme$subsystem", 00:30:04.827 "trtype": "$TEST_TRANSPORT", 00:30:04.827 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:04.827 "adrfam": "ipv4", 00:30:04.827 "trsvcid": "$NVMF_PORT", 00:30:04.827 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:04.827 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:04.827 "hdgst": ${hdgst:-false}, 00:30:04.827 "ddgst": ${ddgst:-false} 00:30:04.827 }, 00:30:04.827 "method": "bdev_nvme_attach_controller" 00:30:04.827 } 00:30:04.827 EOF 00:30:04.827 )") 00:30:04.827 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:04.827 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:30:04.827 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:04.827 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:04.827 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:30:04.827 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@54 -- # local file 00:30:04.827 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:30:04.827 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:30:04.827 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:04.827 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:30:04.827 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:04.827 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:30:04.827 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:04.827 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:30:04.827 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:04.827 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:04.827 { 00:30:04.827 "params": { 00:30:04.827 "name": "Nvme$subsystem", 00:30:04.827 "trtype": "$TEST_TRANSPORT", 00:30:04.827 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:04.827 "adrfam": "ipv4", 00:30:04.827 "trsvcid": "$NVMF_PORT", 00:30:04.827 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:04.827 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:04.827 "hdgst": ${hdgst:-false}, 00:30:04.827 "ddgst": ${ddgst:-false} 00:30:04.827 }, 00:30:04.827 "method": "bdev_nvme_attach_controller" 00:30:04.827 } 00:30:04.827 EOF 00:30:04.827 )") 00:30:04.827 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:30:04.827 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:30:04.827 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:04.827 01:30:25 
nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 00:30:04.827 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:30:04.827 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:04.827 "params": { 00:30:04.827 "name": "Nvme0", 00:30:04.827 "trtype": "tcp", 00:30:04.827 "traddr": "10.0.0.2", 00:30:04.827 "adrfam": "ipv4", 00:30:04.827 "trsvcid": "4420", 00:30:04.827 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:04.827 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:04.827 "hdgst": false, 00:30:04.827 "ddgst": false 00:30:04.827 }, 00:30:04.827 "method": "bdev_nvme_attach_controller" 00:30:04.827 },{ 00:30:04.827 "params": { 00:30:04.827 "name": "Nvme1", 00:30:04.827 "trtype": "tcp", 00:30:04.827 "traddr": "10.0.0.2", 00:30:04.827 "adrfam": "ipv4", 00:30:04.827 "trsvcid": "4420", 00:30:04.827 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:04.827 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:04.827 "hdgst": false, 00:30:04.827 "ddgst": false 00:30:04.827 }, 00:30:04.827 "method": "bdev_nvme_attach_controller" 00:30:04.827 }' 00:30:04.827 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:04.827 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:04.827 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:04.827 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:04.827 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:04.827 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:04.827 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:04.827 
01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:04.827 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:04.827 01:30:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:04.827 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:04.827 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:04.827 fio-3.35 00:30:04.827 Starting 2 threads 00:30:04.827 EAL: No free 2048 kB hugepages reported on node 1 00:30:14.890 00:30:14.890 filename0: (groupid=0, jobs=1): err= 0: pid=1076155: Thu Jul 25 01:30:36 2024 00:30:14.890 read: IOPS=94, BW=379KiB/s (388kB/s)(3792KiB/10006msec) 00:30:14.890 slat (nsec): min=4205, max=20800, avg=8231.74, stdev=2830.03 00:30:14.890 clat (usec): min=41820, max=47753, avg=42193.96, stdev=539.35 00:30:14.890 lat (usec): min=41826, max=47773, avg=42202.20, stdev=539.69 00:30:14.890 clat percentiles (usec): 00:30:14.890 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:30:14.890 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:30:14.890 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[43254], 00:30:14.890 | 99.00th=[43254], 99.50th=[43779], 99.90th=[47973], 99.95th=[47973], 00:30:14.890 | 99.99th=[47973] 00:30:14.890 bw ( KiB/s): min= 352, max= 384, per=49.94%, avg=378.95, stdev=11.99, samples=19 00:30:14.890 iops : min= 88, max= 96, avg=94.74, stdev= 3.00, samples=19 00:30:14.890 lat (msec) : 50=100.00% 00:30:14.890 cpu : usr=97.75%, sys=1.99%, ctx=9, majf=0, minf=35 00:30:14.890 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:14.890 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:14.890 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:14.890 issued rwts: total=948,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:14.890 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:14.890 filename1: (groupid=0, jobs=1): err= 0: pid=1076156: Thu Jul 25 01:30:36 2024 00:30:14.890 read: IOPS=94, BW=379KiB/s (388kB/s)(3808KiB/10041msec) 00:30:14.890 slat (nsec): min=6023, max=43220, avg=8183.89, stdev=2965.04 00:30:14.890 clat (usec): min=41787, max=45473, avg=42163.07, stdev=425.61 00:30:14.890 lat (usec): min=41793, max=45501, avg=42171.25, stdev=426.17 00:30:14.890 clat percentiles (usec): 00:30:14.890 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:30:14.890 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:30:14.890 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:30:14.890 | 99.00th=[43254], 99.50th=[43254], 99.90th=[45351], 99.95th=[45351], 00:30:14.890 | 99.99th=[45351] 00:30:14.890 bw ( KiB/s): min= 352, max= 384, per=50.07%, avg=379.20, stdev=11.72, samples=20 00:30:14.890 iops : min= 88, max= 96, avg=94.80, stdev= 2.93, samples=20 00:30:14.890 lat (msec) : 50=100.00% 00:30:14.890 cpu : usr=97.41%, sys=2.32%, ctx=14, majf=0, minf=201 00:30:14.890 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:14.890 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:14.890 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:14.890 issued rwts: total=952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:14.890 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:14.890 00:30:14.890 Run status group 0 (all jobs): 00:30:14.890 READ: bw=757KiB/s (775kB/s), 379KiB/s-379KiB/s (388kB/s-388kB/s), io=7600KiB (7782kB), run=10006-10041msec 00:30:14.890 01:30:36 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@96 -- # destroy_subsystems 0 1 00:30:14.890 01:30:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:30:14.890 01:30:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:30:14.890 01:30:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:14.890 01:30:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:30:14.890 01:30:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:14.890 01:30:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:14.890 01:30:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:14.890 01:30:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:14.890 01:30:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:14.890 01:30:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:14.890 01:30:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:14.890 01:30:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:14.890 01:30:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:30:14.890 01:30:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:14.890 01:30:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:30:14.890 01:30:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:14.890 01:30:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:14.890 01:30:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 
00:30:14.890 01:30:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:14.890 01:30:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:14.890 01:30:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:14.890 01:30:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:14.890 01:30:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:14.890 00:30:14.890 real 0m11.289s 00:30:14.890 user 0m26.020s 00:30:14.890 sys 0m0.813s 00:30:14.890 01:30:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:14.890 01:30:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:14.890 ************************************ 00:30:14.890 END TEST fio_dif_1_multi_subsystems 00:30:14.890 ************************************ 00:30:14.890 01:30:36 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:30:14.890 01:30:36 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:30:14.890 01:30:36 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:14.890 01:30:36 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:14.890 01:30:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:14.890 ************************************ 00:30:14.890 START TEST fio_dif_rand_params 00:30:14.890 ************************************ 00:30:14.890 01:30:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:30:14.890 01:30:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:30:14.890 01:30:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:30:14.890 01:30:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 
00:30:14.890 01:30:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:30:14.890 01:30:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:30:14.890 01:30:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:30:14.890 01:30:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:30:14.890 01:30:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:30:14.890 01:30:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:14.890 01:30:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:14.890 01:30:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:14.890 01:30:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:30:14.890 01:30:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:30:14.890 01:30:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:14.890 01:30:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:14.890 bdev_null0 00:30:14.890 01:30:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:14.890 01:30:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:14.890 01:30:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:14.890 01:30:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:14.890 01:30:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:14.890 01:30:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:14.890 01:30:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:30:14.890 01:30:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:14.890 01:30:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:14.890 01:30:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:14.890 01:30:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:14.890 01:30:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:14.890 [2024-07-25 01:30:37.053222] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:14.890 01:30:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:14.890 01:30:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:30:14.890 01:30:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:30:14.890 01:30:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:14.890 01:30:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:14.890 01:30:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:14.890 01:30:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:14.890 01:30:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:14.890 01:30:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:14.890 01:30:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:14.891 01:30:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:30:14.891 01:30:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:14.891 01:30:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:14.891 01:30:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:14.891 01:30:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:14.891 01:30:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:14.891 { 00:30:14.891 "params": { 00:30:14.891 "name": "Nvme$subsystem", 00:30:14.891 "trtype": "$TEST_TRANSPORT", 00:30:14.891 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:14.891 "adrfam": "ipv4", 00:30:14.891 "trsvcid": "$NVMF_PORT", 00:30:14.891 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:14.891 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:14.891 "hdgst": ${hdgst:-false}, 00:30:14.891 "ddgst": ${ddgst:-false} 00:30:14.891 }, 00:30:14.891 "method": "bdev_nvme_attach_controller" 00:30:14.891 } 00:30:14.891 EOF 00:30:14.891 )") 00:30:14.891 01:30:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:14.891 01:30:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:30:14.891 01:30:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:14.891 01:30:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:14.891 01:30:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:14.891 01:30:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:14.891 01:30:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:30:14.891 01:30:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 
00:30:14.891 01:30:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:14.891 01:30:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:14.891 01:30:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:30:14.891 01:30:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:14.891 01:30:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:14.891 "params": { 00:30:14.891 "name": "Nvme0", 00:30:14.891 "trtype": "tcp", 00:30:14.891 "traddr": "10.0.0.2", 00:30:14.891 "adrfam": "ipv4", 00:30:14.891 "trsvcid": "4420", 00:30:14.891 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:14.891 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:14.891 "hdgst": false, 00:30:14.891 "ddgst": false 00:30:14.891 }, 00:30:14.891 "method": "bdev_nvme_attach_controller" 00:30:14.891 }' 00:30:14.891 01:30:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:14.891 01:30:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:14.891 01:30:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:14.891 01:30:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:14.891 01:30:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:14.891 01:30:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:14.891 01:30:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:14.891 01:30:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:14.891 01:30:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:14.891 
01:30:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:15.149 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:30:15.149 ... 00:30:15.149 fio-3.35 00:30:15.149 Starting 3 threads 00:30:15.149 EAL: No free 2048 kB hugepages reported on node 1 00:30:21.713 00:30:21.713 filename0: (groupid=0, jobs=1): err= 0: pid=1078045: Thu Jul 25 01:30:42 2024 00:30:21.713 read: IOPS=208, BW=26.1MiB/s (27.3MB/s)(131MiB/5004msec) 00:30:21.713 slat (nsec): min=6237, max=25914, avg=8896.12, stdev=2784.74 00:30:21.713 clat (usec): min=5572, max=59420, avg=14362.86, stdev=14053.78 00:30:21.713 lat (usec): min=5579, max=59433, avg=14371.75, stdev=14053.88 00:30:21.713 clat percentiles (usec): 00:30:21.713 | 1.00th=[ 5997], 5.00th=[ 6456], 10.00th=[ 6783], 20.00th=[ 7373], 00:30:21.713 | 30.00th=[ 7898], 40.00th=[ 8455], 50.00th=[ 9110], 60.00th=[ 9896], 00:30:21.713 | 70.00th=[10814], 80.00th=[12387], 90.00th=[49546], 95.00th=[51119], 00:30:21.713 | 99.00th=[53740], 99.50th=[57410], 99.90th=[59507], 99.95th=[59507], 00:30:21.713 | 99.99th=[59507] 00:30:21.713 bw ( KiB/s): min=15360, max=39168, per=31.88%, avg=26675.20, stdev=7598.79, samples=10 00:30:21.713 iops : min= 120, max= 306, avg=208.40, stdev=59.37, samples=10 00:30:21.713 lat (msec) : 10=61.78%, 20=25.86%, 50=3.83%, 100=8.52% 00:30:21.713 cpu : usr=95.38%, sys=3.90%, ctx=8, majf=0, minf=43 00:30:21.713 IO depths : 1=4.9%, 2=95.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:21.713 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:21.713 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:21.713 issued rwts: total=1044,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:21.713 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:21.713 filename0: (groupid=0, jobs=1): err= 0: pid=1078046: 
Thu Jul 25 01:30:42 2024
00:30:21.713 read: IOPS=185, BW=23.2MiB/s (24.4MB/s)(116MiB/5002msec)
00:30:21.713 slat (nsec): min=6236, max=25874, avg=9038.67, stdev=2849.53
00:30:21.713 clat (usec): min=5982, max=59115, avg=16120.56, stdev=15505.06
00:30:21.713 lat (usec): min=5989, max=59129, avg=16129.60, stdev=15505.15
00:30:21.713 clat percentiles (usec):
00:30:21.713 | 1.00th=[ 6325], 5.00th=[ 6783], 10.00th=[ 7242], 20.00th=[ 7767],
00:30:21.713 | 30.00th=[ 8291], 40.00th=[ 8979], 50.00th=[ 9634], 60.00th=[10290],
00:30:21.713 | 70.00th=[11731], 80.00th=[14353], 90.00th=[51119], 95.00th=[52167],
00:30:21.713 | 99.00th=[56361], 99.50th=[56886], 99.90th=[58983], 99.95th=[58983],
00:30:21.713 | 99.99th=[58983]
00:30:21.713 bw ( KiB/s): min=19200, max=29184, per=27.95%, avg=23381.33, stdev=3283.39, samples=9
00:30:21.713 iops : min= 150, max= 228, avg=182.67, stdev=25.65, samples=9
00:30:21.713 lat (msec) : 10=56.56%, 20=27.85%, 50=2.37%, 100=13.23%
00:30:21.713 cpu : usr=95.36%, sys=4.08%, ctx=10, majf=0, minf=76
00:30:21.713 IO depths : 1=4.9%, 2=95.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:30:21.713 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:30:21.713 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:30:21.713 issued rwts: total=930,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:30:21.713 latency : target=0, window=0, percentile=100.00%, depth=3
00:30:21.713 filename0: (groupid=0, jobs=1): err= 0: pid=1078047: Thu Jul 25 01:30:42 2024
00:30:21.713 read: IOPS=259, BW=32.5MiB/s (34.0MB/s)(163MiB/5012msec)
00:30:21.713 slat (nsec): min=6255, max=28218, avg=9094.62, stdev=2807.06
00:30:21.713 clat (usec): min=5007, max=61138, avg=11525.94, stdev=11161.50
00:30:21.713 lat (usec): min=5014, max=61163, avg=11535.04, stdev=11161.72
00:30:21.713 clat percentiles (usec):
00:30:21.713 | 1.00th=[ 5800], 5.00th=[ 6259], 10.00th=[ 6521], 20.00th=[ 6980],
00:30:21.713 | 30.00th=[ 7373], 40.00th=[ 7701], 50.00th=[ 8094], 60.00th=[ 8586],
00:30:21.713 | 70.00th=[ 9241], 80.00th=[10683], 90.00th=[15008], 95.00th=[50070],
00:30:21.713 | 99.00th=[56361], 99.50th=[57410], 99.90th=[61080], 99.95th=[61080],
00:30:21.713 | 99.99th=[61080]
00:30:21.713 bw ( KiB/s): min=24576, max=44800, per=39.75%, avg=33260.40, stdev=7154.22, samples=10
00:30:21.713 iops : min= 192, max= 350, avg=259.80, stdev=55.92, samples=10
00:30:21.713 lat (msec) : 10=74.96%, 20=18.59%, 50=1.08%, 100=5.38%
00:30:21.713 cpu : usr=94.71%, sys=4.51%, ctx=8, majf=0, minf=148
00:30:21.713 IO depths : 1=3.2%, 2=96.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:30:21.713 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:30:21.713 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:30:21.713 issued rwts: total=1302,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:30:21.713 latency : target=0, window=0, percentile=100.00%, depth=3
00:30:21.713
00:30:21.713 Run status group 0 (all jobs):
00:30:21.713 READ: bw=81.7MiB/s (85.7MB/s), 23.2MiB/s-32.5MiB/s (24.4MB/s-34.0MB/s), io=410MiB (429MB), run=5002-5012msec
00:30:21.713 01:30:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0
00:30:21.713 01:30:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub
00:30:21.713 01:30:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:30:21.713 01:30:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0
00:30:21.713 01:30:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0
00:30:21.713 01:30:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:30:21.713 01:30:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:21.713 01:30:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:30:21.713 01:30:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:21.713 01:30:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:30:21.713 01:30:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:21.713 01:30:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:30:21.713 01:30:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:21.713 01:30:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2
00:30:21.713 01:30:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k
00:30:21.713 01:30:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8
00:30:21.713 01:30:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16
00:30:21.713 01:30:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime=
00:30:21.713 01:30:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2
00:30:21.713 01:30:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2
00:30:21.713 01:30:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub
00:30:21.713 01:30:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:30:21.713 01:30:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0
00:30:21.713 01:30:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0
00:30:21.713 01:30:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2
00:30:21.713 01:30:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:21.713 01:30:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:30:21.713 bdev_null0
00:30:21.713 01:30:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:21.713 01:30:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:30:21.713 01:30:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:21.713 01:30:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:30:21.713 01:30:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:21.713 01:30:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:30:21.713 01:30:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:21.713 01:30:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:30:21.713 01:30:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:21.713 01:30:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:30:21.713 01:30:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:21.713 01:30:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:30:21.714 [2024-07-25 01:30:43.137287] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:30:21.714 bdev_null1
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:30:21.714 bdev_null2
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=()
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib=
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:30:21.714 {
00:30:21.714 "params": {
00:30:21.714 "name": "Nvme$subsystem",
00:30:21.714 "trtype": "$TEST_TRANSPORT",
00:30:21.714 "traddr": "$NVMF_FIRST_TARGET_IP",
00:30:21.714 "adrfam": "ipv4",
00:30:21.714 "trsvcid": "$NVMF_PORT",
00:30:21.714 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:30:21.714 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:30:21.714 "hdgst": ${hdgst:-false},
00:30:21.714 "ddgst": ${ddgst:-false}
00:30:21.714 },
00:30:21.714 "method": "bdev_nvme_attach_controller"
00:30:21.714 }
00:30:21.714 EOF
00:30:21.714 )")
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 ))
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:30:21.714 {
00:30:21.714 "params": {
00:30:21.714 "name": "Nvme$subsystem",
00:30:21.714 "trtype": "$TEST_TRANSPORT",
00:30:21.714 "traddr": "$NVMF_FIRST_TARGET_IP",
00:30:21.714 "adrfam": "ipv4",
00:30:21.714 "trsvcid": "$NVMF_PORT",
00:30:21.714 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:30:21.714 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:30:21.714 "hdgst": ${hdgst:-false},
00:30:21.714 "ddgst": ${ddgst:-false}
00:30:21.714 },
00:30:21.714 "method": "bdev_nvme_attach_controller"
00:30:21.714 }
00:30:21.714 EOF
00:30:21.714 )")
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ ))
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ ))
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:30:21.714 {
00:30:21.714 "params": {
00:30:21.714 "name": "Nvme$subsystem",
00:30:21.714 "trtype": "$TEST_TRANSPORT",
00:30:21.714 "traddr": "$NVMF_FIRST_TARGET_IP",
00:30:21.714 "adrfam": "ipv4",
00:30:21.714 "trsvcid": "$NVMF_PORT",
00:30:21.714 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:30:21.714 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:30:21.714 "hdgst": ${hdgst:-false},
00:30:21.714 "ddgst": ${ddgst:-false}
00:30:21.714 },
00:30:21.714 "method": "bdev_nvme_attach_controller"
00:30:21.714 }
00:30:21.714 EOF
00:30:21.714 )")
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq .
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=,
00:30:21.714 01:30:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:30:21.714 "params": {
00:30:21.714 "name": "Nvme0",
00:30:21.714 "trtype": "tcp",
00:30:21.714 "traddr": "10.0.0.2",
00:30:21.714 "adrfam": "ipv4",
00:30:21.714 "trsvcid": "4420",
00:30:21.714 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:30:21.714 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:30:21.714 "hdgst": false,
00:30:21.714 "ddgst": false
00:30:21.714 },
00:30:21.714 "method": "bdev_nvme_attach_controller"
00:30:21.714 },{
00:30:21.714 "params": {
00:30:21.714 "name": "Nvme1",
00:30:21.714 "trtype": "tcp",
00:30:21.714 "traddr": "10.0.0.2",
00:30:21.714 "adrfam": "ipv4",
00:30:21.714 "trsvcid": "4420",
00:30:21.714 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:30:21.714 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:30:21.714 "hdgst": false,
00:30:21.714 "ddgst": false
00:30:21.714 },
00:30:21.714 "method": "bdev_nvme_attach_controller"
00:30:21.714 },{
00:30:21.714 "params": {
00:30:21.714 "name": "Nvme2",
00:30:21.714 "trtype": "tcp",
00:30:21.714 "traddr": "10.0.0.2",
00:30:21.715 "adrfam": "ipv4",
00:30:21.715 "trsvcid": "4420",
00:30:21.715 "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:30:21.715 "hostnqn": "nqn.2016-06.io.spdk:host2",
00:30:21.715 "hdgst": false,
00:30:21.715 "ddgst": false
00:30:21.715 },
00:30:21.715 "method": "bdev_nvme_attach_controller"
00:30:21.715 }'
00:30:21.715 01:30:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=
00:30:21.715 01:30:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:30:21.715 01:30:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:30:21.715 01:30:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:30:21.715 01:30:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan
00:30:21.715 01:30:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:30:21.715 01:30:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=
00:30:21.715 01:30:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:30:21.715 01:30:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:30:21.715 01:30:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:30:21.715 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16
00:30:21.715 ...
00:30:21.715 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16
00:30:21.715 ...
00:30:21.715 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16
00:30:21.715 ...
00:30:21.715 fio-3.35
00:30:21.715 Starting 24 threads
00:30:21.715 EAL: No free 2048 kB hugepages reported on node 1
00:30:33.907
00:30:33.907 filename0: (groupid=0, jobs=1): err= 0: pid=1079167: Thu Jul 25 01:30:54 2024
00:30:33.907 read: IOPS=542, BW=2170KiB/s (2222kB/s)(21.2MiB/10021msec)
00:30:33.907 slat (nsec): min=3611, max=71217, avg=14836.70, stdev=10012.25
00:30:33.907 clat (usec): min=11844, max=54133, avg=29401.32, stdev=5856.27
00:30:33.907 lat (usec): min=11851, max=54140, avg=29416.16, stdev=5857.33
00:30:33.907 clat percentiles (usec):
00:30:33.907 | 1.00th=[14484], 5.00th=[21365], 10.00th=[24773], 20.00th=[25560],
00:30:33.907 | 30.00th=[26084], 40.00th=[26608], 50.00th=[27395], 60.00th=[29754],
00:30:33.907 | 70.00th=[32375], 80.00th=[33817], 90.00th=[36963], 95.00th=[39584],
00:30:33.907 | 99.00th=[46400], 99.50th=[50594], 99.90th=[53740], 99.95th=[54264],
00:30:33.907 | 99.99th=[54264]
00:30:33.907 bw ( KiB/s): min= 2000, max= 2352, per=3.97%, avg=2168.60, stdev=108.33, samples=20
00:30:33.907 iops : min= 500, max= 588, avg=542.15, stdev=27.08, samples=20
00:30:33.907 lat (msec) : 20=4.14%, 50=95.29%, 100=0.57%
00:30:33.907 cpu : usr=98.09%, sys=1.46%, ctx=17, majf=0, minf=45
00:30:33.907 IO depths : 1=1.0%, 2=2.1%, 4=8.8%, 8=75.8%, 16=12.3%, 32=0.0%, >=64=0.0%
00:30:33.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:30:33.907 complete : 0=0.0%, 4=89.8%, 8=5.3%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:30:33.907 issued rwts: total=5437,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:30:33.907 latency : target=0, window=0, percentile=100.00%, depth=16
00:30:33.907 filename0: (groupid=0, jobs=1): err= 0: pid=1079168: Thu Jul 25 01:30:54 2024
00:30:33.907 read: IOPS=591, BW=2367KiB/s (2424kB/s)(23.1MiB/10015msec)
00:30:33.907 slat (nsec): min=6860, max=97111, avg=26513.14, stdev=18500.55
00:30:33.907 clat (usec): min=8550, max=48278, avg=26896.31, stdev=4385.34
00:30:33.907 lat (usec): min=8598, max=48287, avg=26922.82, stdev=4385.57
00:30:33.907 clat percentiles (usec):
00:30:33.907 | 1.00th=[14615], 5.00th=[21627], 10.00th=[24249], 20.00th=[25035],
00:30:33.907 | 30.00th=[25560], 40.00th=[25822], 50.00th=[26084], 60.00th=[26608],
00:30:33.907 | 70.00th=[26870], 80.00th=[27657], 90.00th=[32637], 95.00th=[35914],
00:30:33.907 | 99.00th=[41681], 99.50th=[44303], 99.90th=[47973], 99.95th=[48497],
00:30:33.907 | 99.99th=[48497]
00:30:33.907 bw ( KiB/s): min= 2096, max= 2488, per=4.33%, avg=2364.20, stdev=99.15, samples=20
00:30:33.907 iops : min= 524, max= 622, avg=591.05, stdev=24.79, samples=20
00:30:33.907 lat (msec) : 10=0.13%, 20=4.03%, 50=95.83%
00:30:33.907 cpu : usr=98.54%, sys=1.03%, ctx=22, majf=0, minf=25
00:30:33.907 IO depths : 1=0.7%, 2=1.5%, 4=7.0%, 8=77.1%, 16=13.8%, 32=0.0%, >=64=0.0%
00:30:33.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:30:33.907 complete : 0=0.0%, 4=89.9%, 8=6.2%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:30:33.907 issued rwts: total=5926,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:30:33.907 latency : target=0, window=0, percentile=100.00%, depth=16
00:30:33.907 filename0: (groupid=0, jobs=1): err= 0: pid=1079169: Thu Jul 25 01:30:54 2024
00:30:33.907 read: IOPS=568, BW=2275KiB/s (2329kB/s)(22.3MiB/10021msec)
00:30:33.907 slat (nsec): min=6852, max=78545, avg=18750.10, stdev=11153.11
00:30:33.907 clat (usec): min=11427, max=49382, avg=28011.82, stdev=4907.30
00:30:33.908 lat (usec): min=11441, max=49407, avg=28030.57, stdev=4907.59
00:30:33.908 clat percentiles (usec):
00:30:33.908 | 1.00th=[15664], 5.00th=[23200], 10.00th=[24511], 20.00th=[25297],
00:30:33.908 | 30.00th=[25822], 40.00th=[26084], 50.00th=[26608], 60.00th=[26870],
00:30:33.908 | 70.00th=[27657], 80.00th=[32113], 90.00th=[35390], 95.00th=[38011],
00:30:33.908 | 99.00th=[43779], 99.50th=[45351], 99.90th=[47973], 99.95th=[49546],
00:30:33.908 | 99.99th=[49546]
00:30:33.908 bw ( KiB/s): min= 2048, max= 2432, per=4.16%, avg=2275.60, stdev=105.80, samples=20
00:30:33.908 iops : min= 512, max= 608, avg=568.90, stdev=26.45, samples=20
00:30:33.908 lat (msec) : 20=3.14%, 50=96.86%
00:30:33.908 cpu : usr=98.33%, sys=1.21%, ctx=19, majf=0, minf=31
00:30:33.908 IO depths : 1=0.6%, 2=1.3%, 4=7.9%, 8=76.8%, 16=13.5%, 32=0.0%, >=64=0.0%
00:30:33.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:30:33.908 complete : 0=0.0%, 4=90.2%, 8=5.5%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0%
00:30:33.908 issued rwts: total=5699,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:30:33.908 latency : target=0, window=0, percentile=100.00%, depth=16
00:30:33.908 filename0: (groupid=0, jobs=1): err= 0: pid=1079170: Thu Jul 25 01:30:54 2024
00:30:33.908 read: IOPS=609, BW=2436KiB/s (2495kB/s)(23.8MiB/10021msec)
00:30:33.908 slat (nsec): min=6903, max=76791, avg=16246.37, stdev=10558.69
00:30:33.908 clat (usec): min=10904, max=50308, avg=26154.63, stdev=4780.95
00:30:33.908 lat (usec): min=10912, max=50322, avg=26170.88, stdev=4782.25
00:30:33.908 clat percentiles (usec):
00:30:33.908 | 1.00th=[14484], 5.00th=[17433], 10.00th=[19792], 20.00th=[24511],
00:30:33.908 | 30.00th=[25297], 40.00th=[25822], 50.00th=[26084], 60.00th=[26346],
00:30:33.908 | 70.00th=[26870], 80.00th=[27395], 90.00th=[32637], 95.00th=[34866],
00:30:33.908 | 99.00th=[40109], 99.50th=[41681], 99.90th=[47449], 99.95th=[50070],
00:30:33.908 | 99.99th=[50070]
00:30:33.908 bw ( KiB/s): min= 2096, max= 2832, per=4.46%, avg=2434.80, stdev=164.47, samples=20
00:30:33.908 iops : min= 524, max= 708, avg=608.70, stdev=41.12, samples=20
00:30:33.908 lat (msec) : 20=10.27%, 50=89.66%, 100=0.07%
00:30:33.908 cpu : usr=98.59%, sys=0.99%, ctx=20, majf=0, minf=31
00:30:33.908 IO depths : 1=2.1%, 2=4.7%, 4=15.5%, 8=66.7%, 16=11.1%, 32=0.0%, >=64=0.0%
00:30:33.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:30:33.908 complete : 0=0.0%, 4=91.9%, 8=3.0%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:30:33.908 issued rwts: total=6103,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:30:33.908 latency : target=0, window=0, percentile=100.00%, depth=16
00:30:33.908 filename0: (groupid=0, jobs=1): err= 0: pid=1079171: Thu Jul 25 01:30:54 2024
00:30:33.908 read: IOPS=552, BW=2209KiB/s (2262kB/s)(21.6MiB/10004msec)
00:30:33.908 slat (nsec): min=6851, max=80866, avg=19498.22, stdev=13176.34
00:30:33.908 clat (usec): min=5714, max=58887, avg=28832.65, stdev=5470.43
00:30:33.908 lat (usec): min=5723, max=58905, avg=28852.15, stdev=5468.54
00:30:33.908 clat percentiles (usec):
00:30:33.908 | 1.00th=[15401], 5.00th=[23462], 10.00th=[24511], 20.00th=[25297],
00:30:33.908 | 30.00th=[25822], 40.00th=[26346], 50.00th=[26870], 60.00th=[27657],
00:30:33.908 | 70.00th=[31327], 80.00th=[33817], 90.00th=[36439], 95.00th=[38536],
00:30:33.908 | 99.00th=[43254], 99.50th=[49021], 99.90th=[58983], 99.95th=[58983],
00:30:33.908 | 99.99th=[58983]
00:30:33.908 bw ( KiB/s): min= 1792, max= 2560, per=4.05%, avg=2213.63, stdev=189.06, samples=19
00:30:33.908 iops : min= 448, max= 640, avg=553.37, stdev=47.33, samples=19
00:30:33.908 lat (msec) : 10=0.29%, 20=2.23%, 50=97.05%, 100=0.43%
00:30:33.908 cpu : usr=98.83%, sys=0.78%, ctx=14, majf=0, minf=37
00:30:33.908 IO depths : 1=1.6%, 2=3.9%, 4=14.4%, 8=68.6%, 16=11.4%, 32=0.0%, >=64=0.0%
00:30:33.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:30:33.908 complete : 0=0.0%, 4=91.7%, 8=3.2%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0%
00:30:33.908 issued rwts: total=5525,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:30:33.908 latency : target=0, window=0, percentile=100.00%, depth=16
00:30:33.908 filename0: (groupid=0, jobs=1): err= 0: pid=1079172: Thu Jul 25 01:30:54 2024
00:30:33.908 read: IOPS=545, BW=2183KiB/s (2235kB/s)(21.3MiB/10008msec)
00:30:33.908 slat (nsec): min=6866, max=72428, avg=17242.92, stdev=11474.18
00:30:33.908 clat (usec): min=8496, max=55006, avg=29223.10, stdev=5807.91
00:30:33.908 lat (usec): min=8508, max=55025, avg=29240.34, stdev=5806.66
00:30:33.908 clat percentiles (usec):
00:30:33.908 | 1.00th=[14746], 5.00th=[23725], 10.00th=[24773], 20.00th=[25560],
00:30:33.908 | 30.00th=[26084], 40.00th=[26608], 50.00th=[26870], 60.00th=[28181],
00:30:33.908 | 70.00th=[31851], 80.00th=[34341], 90.00th=[36439], 95.00th=[39060],
00:30:33.908 | 99.00th=[47449], 99.50th=[49546], 99.90th=[52167], 99.95th=[54789],
00:30:33.908 | 99.99th=[54789]
00:30:33.908 bw ( KiB/s): min= 1920, max= 2352, per=3.98%, avg=2172.00, stdev=113.07, samples=19
00:30:33.908 iops : min= 480, max= 588, avg=543.00, stdev=28.27, samples=19
00:30:33.908 lat (msec) : 10=0.38%, 20=2.64%, 50=96.69%, 100=0.29%
00:30:33.908 cpu : usr=98.42%, sys=1.18%, ctx=10, majf=0, minf=46
00:30:33.908 IO depths : 1=0.3%, 2=0.8%, 4=7.5%, 8=77.8%, 16=13.5%, 32=0.0%, >=64=0.0%
00:30:33.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:30:33.908 complete : 0=0.0%, 4=90.2%, 8=5.2%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0%
00:30:33.908 issued rwts: total=5462,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:30:33.908 latency : target=0, window=0, percentile=100.00%, depth=16
00:30:33.908 filename0: (groupid=0, jobs=1): err= 0: pid=1079173: Thu Jul 25 01:30:54 2024
00:30:33.908 read: IOPS=569, BW=2277KiB/s (2332kB/s)(22.3MiB/10012msec)
00:30:33.908 slat (nsec): min=6898, max=84857, avg=19738.57, stdev=12482.20
00:30:33.908 clat (usec): min=11524, max=51453, avg=27992.15, stdev=5043.17
00:30:33.908 lat (usec): min=11539, max=51461, avg=28011.89, stdev=5042.78
00:30:33.908 clat percentiles (usec):
00:30:33.908 | 1.00th=[16188], 5.00th=[22676], 10.00th=[24249], 20.00th=[25297],
00:30:33.908 | 30.00th=[25822], 40.00th=[26084], 50.00th=[26346], 60.00th=[26870],
00:30:33.908 | 70.00th=[27657], 80.00th=[32375], 90.00th=[35914], 95.00th=[37487],
00:30:33.908 | 99.00th=[42730], 99.50th=[44303], 99.90th=[50594], 99.95th=[50594],
00:30:33.908 | 99.99th=[51643]
00:30:33.908 bw ( KiB/s): min= 2048, max= 2376, per=4.15%, avg=2269.05, stdev=83.10, samples=19
00:30:33.908 iops : min= 512, max= 594, avg=567.26, stdev=20.78, samples=19
00:30:33.908 lat (msec) : 20=3.39%, 50=96.51%, 100=0.11%
00:30:33.908 cpu : usr=98.60%, sys=1.01%, ctx=10, majf=0, minf=37
00:30:33.908 IO depths : 1=0.3%, 2=0.6%, 4=7.4%, 8=77.8%, 16=13.9%, 32=0.0%, >=64=0.0%
00:30:33.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:30:33.908 complete : 0=0.0%, 4=89.9%, 8=6.1%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:30:33.908 issued rwts: total=5700,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:30:33.908 latency : target=0, window=0, percentile=100.00%, depth=16
00:30:33.908 filename0: (groupid=0, jobs=1): err= 0: pid=1079174: Thu Jul 25 01:30:54 2024
00:30:33.908 read: IOPS=580, BW=2323KiB/s (2379kB/s)(22.7MiB/10006msec)
00:30:33.908 slat (nsec): min=6517, max=77975, avg=18629.65, stdev=12270.09
00:30:33.908 clat (usec): min=6918, max=62358, avg=27432.89, stdev=5381.84
00:30:33.908 lat (usec): min=6929, max=62384, avg=27451.52, stdev=5381.86
00:30:33.908 clat percentiles (usec):
00:30:33.908 | 1.00th=[13042], 5.00th=[19792], 10.00th=[24249], 20.00th=[25035],
00:30:33.908 | 30.00th=[25560], 40.00th=[25822], 50.00th=[26346], 60.00th=[26608],
00:30:33.908 | 70.00th=[27132], 80.00th=[30278], 90.00th=[34866], 95.00th=[37487],
00:30:33.908 | 99.00th=[43779], 99.50th=[47973], 99.90th=[62129], 99.95th=[62129],
00:30:33.908 | 99.99th=[62129]
00:30:33.908 bw ( KiB/s): min= 1872, max= 2560, per=4.23%, avg=2312.00, stdev=150.47, samples=19
00:30:33.908 iops : min= 468, max= 640, avg=578.00, stdev=37.62, samples=19
00:30:33.908 lat (msec) : 10=0.19%, 20=4.84%, 50=94.61%, 100=0.36%
00:30:33.908 cpu : usr=98.43%, sys=1.17%, ctx=17, majf=0, minf=50
00:30:33.908 IO depths : 1=0.7%, 2=2.1%, 4=9.9%, 8=73.6%, 16=13.6%, 32=0.0%, >=64=0.0%
00:30:33.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:30:33.908 complete : 0=0.0%, 4=90.8%, 8=5.2%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:30:33.908 issued rwts: total=5811,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:30:33.908 latency : target=0, window=0, percentile=100.00%, depth=16
00:30:33.908 filename1: (groupid=0, jobs=1): err= 0: pid=1079175: Thu Jul 25 01:30:54 2024
00:30:33.908 read: IOPS=571, BW=2286KiB/s (2340kB/s)(22.3MiB/10007msec)
00:30:33.908 slat (nsec): min=6115, max=75435, avg=19560.76, stdev=12203.86
00:30:33.908 clat (usec): min=6673, max=52592, avg=27895.56, stdev=5000.01
00:30:33.908 lat (usec): min=6681, max=52607, avg=27915.12, stdev=5000.11
00:30:33.908 clat percentiles (usec):
00:30:33.908 | 1.00th=[14091], 5.00th=[22414], 10.00th=[24249], 20.00th=[25297],
00:30:33.908 | 30.00th=[25822], 40.00th=[26084], 50.00th=[26608], 60.00th=[26870],
00:30:33.908 | 70.00th=[27919], 80.00th=[31851], 90.00th=[34866], 95.00th=[37487],
00:30:33.908 | 99.00th=[41681], 99.50th=[44303], 99.90th=[52167], 99.95th=[52691],
00:30:33.908 | 99.99th=[52691]
00:30:33.908 bw ( KiB/s): min= 2048, max= 2464, per=4.17%, avg=2278.95, stdev=98.34, samples=19
00:30:33.908 iops : min= 512, max= 616, avg=569.74, stdev=24.59, samples=19
00:30:33.908 lat (msec) : 10=0.17%, 20=3.48%, 50=96.24%, 100=0.10%
00:30:33.908 cpu : usr=98.68%, sys=0.92%, ctx=19, majf=0, minf=29
00:30:33.908 IO depths : 1=0.5%, 2=1.1%, 4=7.3%, 8=77.7%, 16=13.4%, 32=0.0%, >=64=0.0%
00:30:33.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:30:33.908 complete : 0=0.0%, 4=89.7%, 8=6.0%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0%
00:30:33.908 issued rwts: total=5718,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:30:33.908 latency : target=0, window=0, percentile=100.00%, depth=16
00:30:33.908 filename1: (groupid=0, jobs=1): err= 0: pid=1079176: Thu Jul 25 01:30:54 2024
00:30:33.908 read: IOPS=564, BW=2257KiB/s (2311kB/s)(22.1MiB/10010msec)
00:30:33.908 slat (nsec): min=6523, max=87085, avg=18075.71, stdev=12943.15
00:30:33.908 clat (usec): min=10815, max=55891, avg=28251.43, stdev=5296.10
00:30:33.908 lat (usec): min=10830, max=55908, avg=28269.51, stdev=5295.11
00:30:33.908 clat percentiles (usec):
00:30:33.908 | 1.00th=[15533], 5.00th=[22676], 10.00th=[24511], 20.00th=[25297],
00:30:33.908 | 30.00th=[25822], 40.00th=[26346], 50.00th=[26608], 60.00th=[27132],
00:30:33.908 | 70.00th=[28443], 80.00th=[32375], 90.00th=[35390], 95.00th=[38536],
00:30:33.908 | 99.00th=[44827], 99.50th=[49021], 99.90th=[51643], 99.95th=[55837],
00:30:33.908 | 99.99th=[55837]
00:30:33.908 bw ( KiB/s): min= 2064, max= 2416, per=4.11%, avg=2247.79, stdev=105.36, samples=19
00:30:33.908 iops : min= 516, max= 604, avg=561.95, stdev=26.34, samples=19
00:30:33.909 lat (msec) : 20=3.06%, 50=96.72%, 100=0.21%
00:30:33.909 cpu : usr=98.33%, sys=1.18%, ctx=15, majf=0, minf=34
00:30:33.909 IO depths : 1=0.3%, 2=0.7%, 4=7.5%, 8=77.3%, 16=14.1%, 32=0.0%, >=64=0.0%
00:30:33.909 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:30:33.909 complete : 0=0.0%, 4=90.4%, 8=5.4%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0%
00:30:33.909 issued rwts: total=5648,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:30:33.909 latency : target=0, window=0, percentile=100.00%, depth=16
00:30:33.909 filename1: (groupid=0, jobs=1): err= 0: pid=1079177: Thu Jul 25 01:30:54 2024
00:30:33.909 read: IOPS=565, BW=2263KiB/s (2318kB/s)(22.1MiB/10008msec)
00:30:33.909 slat (nsec): min=6871, max=84973, avg=18147.34, stdev=12279.59
00:30:33.909 clat (usec): min=8227, max=58500, avg=28178.96, stdev=5239.38
00:30:33.909 lat (usec): min=8240, max=58518, avg=28197.11, stdev=5239.07
00:30:33.909 clat percentiles (usec):
00:30:33.909 | 1.00th=[14222], 5.00th=[22938], 10.00th=[24511], 20.00th=[25297],
00:30:33.909 | 30.00th=[25822], 40.00th=[26084], 50.00th=[26608], 60.00th=[27132],
00:30:33.909 | 70.00th=[28443], 80.00th=[32375], 90.00th=[35914], 95.00th=[37487],
00:30:33.909 | 99.00th=[43779], 99.50th=[45876], 99.90th=[51643], 99.95th=[58459],
00:30:33.909 | 99.99th=[58459]
00:30:33.909 bw ( KiB/s): min= 2052, max= 2384, per=4.12%, avg=2249.89, stdev=82.24, samples=19
00:30:33.909 iops : min= 513, max= 596, avg=562.47, stdev=20.56, samples=19
00:30:33.909 lat (msec) : 10=0.39%, 20=2.97%, 50=96.45%, 100=0.19%
00:30:33.909 cpu : usr=98.36%, sys=1.24%, ctx=16, majf=0, minf=38
00:30:33.909 IO depths : 1=0.2%, 2=0.6%, 4=7.1%, 8=78.1%, 16=14.0%, 32=0.0%, >=64=0.0%
00:30:33.909 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:30:33.909 complete : 0=0.0%, 4=90.1%, 8=5.7%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0%
00:30:33.909 issued rwts: total=5663,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:30:33.909 latency : target=0, window=0, percentile=100.00%, depth=16
00:30:33.909 filename1: (groupid=0, jobs=1): err= 0: pid=1079178: Thu Jul 25 01:30:54 2024
00:30:33.909 read: IOPS=596, BW=2385KiB/s (2442kB/s)(23.3MiB/10021msec)
00:30:33.909 slat (nsec): min=5295, max=86333, avg=23638.30, stdev=13783.21
00:30:33.909 clat (usec): min=11771, max=49855, avg=26598.23, stdev=3384.62
00:30:33.909 lat (usec): min=11782, max=49863, avg=26621.87, stdev=3383.97
00:30:33.909 clat percentiles (usec):
00:30:33.909 | 1.00th=[16909], 5.00th=[23462], 10.00th=[24511], 20.00th=[25035],
00:30:33.909 | 30.00th=[25560], 40.00th=[25822], 50.00th=[26084], 60.00th=[26346],
00:30:33.909 | 70.00th=[26608], 80.00th=[27132], 90.00th=[31327], 95.00th=[33817],
00:30:33.909 | 99.00th=[38536], 99.50th=[40109], 99.90th=[44827], 99.95th=[50070],
00:30:33.909 | 99.99th=[50070]
00:30:33.909 bw ( KiB/s): min= 2208, max= 2560, per=4.37%, avg=2388.80, stdev=89.78, samples=20
00:30:33.909 iops : min= 552, max= 640, avg=597.20, stdev=22.44, samples=20
00:30:33.909 lat (msec) : 20=2.66%, 50=97.34%
00:30:33.909 cpu : usr=98.69%, sys=0.91%, ctx=15, majf=0, minf=27
00:30:33.909 IO depths : 1=4.4%, 2=9.2%, 4=21.4%, 8=56.7%, 16=8.4%, 32=0.0%, >=64=0.0%
00:30:33.909 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:30:33.909 complete : 0=0.0%, 4=93.5%, 8=0.9%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0%
00:30:33.909 issued rwts: total=5975,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:30:33.
latency : target=0, window=0, percentile=100.00%, depth=16 00:30:33.909 filename1: (groupid=0, jobs=1): err= 0: pid=1079179: Thu Jul 25 01:30:54 2024 00:30:33.909 read: IOPS=557, BW=2228KiB/s (2282kB/s)(21.8MiB/10015msec) 00:30:33.909 slat (nsec): min=6773, max=76803, avg=15448.66, stdev=9234.19 00:30:33.909 clat (usec): min=12696, max=53845, avg=28623.44, stdev=5850.77 00:30:33.909 lat (usec): min=12703, max=53881, avg=28638.89, stdev=5852.19 00:30:33.909 clat percentiles (usec): 00:30:33.909 | 1.00th=[16057], 5.00th=[19792], 10.00th=[24249], 20.00th=[25297], 00:30:33.909 | 30.00th=[25822], 40.00th=[26346], 50.00th=[26608], 60.00th=[27395], 00:30:33.909 | 70.00th=[30802], 80.00th=[33424], 90.00th=[36439], 95.00th=[38536], 00:30:33.909 | 99.00th=[46400], 99.50th=[51119], 99.90th=[53216], 99.95th=[53740], 00:30:33.909 | 99.99th=[53740] 00:30:33.909 bw ( KiB/s): min= 2000, max= 2432, per=4.07%, avg=2225.20, stdev=125.16, samples=20 00:30:33.909 iops : min= 500, max= 608, avg=556.30, stdev=31.29, samples=20 00:30:33.909 lat (msec) : 20=5.34%, 50=94.07%, 100=0.59% 00:30:33.909 cpu : usr=98.53%, sys=1.07%, ctx=17, majf=0, minf=40 00:30:33.909 IO depths : 1=0.6%, 2=1.3%, 4=7.9%, 8=77.6%, 16=12.6%, 32=0.0%, >=64=0.0% 00:30:33.909 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:33.909 complete : 0=0.0%, 4=89.8%, 8=5.3%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:33.909 issued rwts: total=5579,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:33.909 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:33.909 filename1: (groupid=0, jobs=1): err= 0: pid=1079180: Thu Jul 25 01:30:54 2024 00:30:33.909 read: IOPS=522, BW=2090KiB/s (2140kB/s)(20.5MiB/10047msec) 00:30:33.909 slat (nsec): min=6434, max=82518, avg=17800.62, stdev=12290.52 00:30:33.909 clat (usec): min=11197, max=59200, avg=30520.21, stdev=5878.35 00:30:33.909 lat (usec): min=11209, max=59217, avg=30538.01, stdev=5876.58 00:30:33.909 clat percentiles (usec): 00:30:33.909 | 
1.00th=[16057], 5.00th=[24511], 10.00th=[25035], 20.00th=[25822], 00:30:33.909 | 30.00th=[26346], 40.00th=[27132], 50.00th=[29230], 60.00th=[31851], 00:30:33.909 | 70.00th=[33424], 80.00th=[35390], 90.00th=[38011], 95.00th=[40633], 00:30:33.909 | 99.00th=[47449], 99.50th=[50070], 99.90th=[53216], 99.95th=[58983], 00:30:33.909 | 99.99th=[58983] 00:30:33.909 bw ( KiB/s): min= 1824, max= 2352, per=3.81%, avg=2084.63, stdev=140.41, samples=19 00:30:33.909 iops : min= 456, max= 588, avg=521.16, stdev=35.10, samples=19 00:30:33.909 lat (msec) : 20=1.81%, 50=97.64%, 100=0.55% 00:30:33.909 cpu : usr=98.74%, sys=0.87%, ctx=14, majf=0, minf=37 00:30:33.909 IO depths : 1=0.3%, 2=1.0%, 4=8.9%, 8=76.5%, 16=13.3%, 32=0.0%, >=64=0.0% 00:30:33.909 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:33.909 complete : 0=0.0%, 4=90.7%, 8=4.6%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:33.909 issued rwts: total=5249,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:33.909 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:33.909 filename1: (groupid=0, jobs=1): err= 0: pid=1079181: Thu Jul 25 01:30:54 2024 00:30:33.909 read: IOPS=572, BW=2289KiB/s (2344kB/s)(22.4MiB/10011msec) 00:30:33.909 slat (usec): min=5, max=337, avg=26.81, stdev=17.01 00:30:33.909 clat (usec): min=10897, max=52967, avg=27800.63, stdev=4994.61 00:30:33.909 lat (usec): min=10937, max=52978, avg=27827.44, stdev=4993.63 00:30:33.909 clat percentiles (usec): 00:30:33.909 | 1.00th=[15401], 5.00th=[22938], 10.00th=[24249], 20.00th=[25035], 00:30:33.909 | 30.00th=[25560], 40.00th=[26084], 50.00th=[26346], 60.00th=[26870], 00:30:33.909 | 70.00th=[27395], 80.00th=[31589], 90.00th=[34866], 95.00th=[38011], 00:30:33.909 | 99.00th=[43779], 99.50th=[47973], 99.90th=[50594], 99.95th=[52691], 00:30:33.909 | 99.99th=[53216] 00:30:33.909 bw ( KiB/s): min= 2080, max= 2448, per=4.19%, avg=2288.20, stdev=100.03, samples=20 00:30:33.909 iops : min= 520, max= 612, avg=572.05, stdev=25.01, samples=20 
00:30:33.909 lat (msec) : 20=2.72%, 50=97.07%, 100=0.21% 00:30:33.909 cpu : usr=91.83%, sys=3.63%, ctx=300, majf=0, minf=40 00:30:33.909 IO depths : 1=0.8%, 2=1.6%, 4=9.0%, 8=75.7%, 16=13.1%, 32=0.0%, >=64=0.0% 00:30:33.909 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:33.909 complete : 0=0.0%, 4=90.4%, 8=5.0%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:33.909 issued rwts: total=5730,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:33.909 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:33.909 filename1: (groupid=0, jobs=1): err= 0: pid=1079182: Thu Jul 25 01:30:54 2024 00:30:33.909 read: IOPS=565, BW=2262KiB/s (2316kB/s)(22.1MiB/10017msec) 00:30:33.909 slat (nsec): min=6888, max=74710, avg=20213.00, stdev=12357.37 00:30:33.909 clat (usec): min=11581, max=53277, avg=28165.08, stdev=5167.17 00:30:33.909 lat (usec): min=11594, max=53288, avg=28185.30, stdev=5167.16 00:30:33.909 clat percentiles (usec): 00:30:33.909 | 1.00th=[15008], 5.00th=[21103], 10.00th=[24249], 20.00th=[25035], 00:30:33.909 | 30.00th=[25560], 40.00th=[26084], 50.00th=[26608], 60.00th=[27132], 00:30:33.909 | 70.00th=[29230], 80.00th=[33162], 90.00th=[35914], 95.00th=[38011], 00:30:33.909 | 99.00th=[42730], 99.50th=[43779], 99.90th=[46924], 99.95th=[48497], 00:30:33.909 | 99.99th=[53216] 00:30:33.909 bw ( KiB/s): min= 2048, max= 2408, per=4.14%, avg=2259.80, stdev=95.57, samples=20 00:30:33.909 iops : min= 512, max= 602, avg=564.95, stdev=23.89, samples=20 00:30:33.909 lat (msec) : 20=3.50%, 50=96.49%, 100=0.02% 00:30:33.909 cpu : usr=98.60%, sys=1.00%, ctx=17, majf=0, minf=33 00:30:33.909 IO depths : 1=0.6%, 2=1.2%, 4=8.6%, 8=76.4%, 16=13.3%, 32=0.0%, >=64=0.0% 00:30:33.909 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:33.909 complete : 0=0.0%, 4=90.1%, 8=5.6%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:33.909 issued rwts: total=5665,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:33.909 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:30:33.909 filename2: (groupid=0, jobs=1): err= 0: pid=1079183: Thu Jul 25 01:30:54 2024 00:30:33.909 read: IOPS=592, BW=2370KiB/s (2427kB/s)(23.2MiB/10010msec) 00:30:33.909 slat (nsec): min=5331, max=82236, avg=20935.80, stdev=14040.35 00:30:33.909 clat (usec): min=10385, max=66561, avg=26898.42, stdev=3752.51 00:30:33.909 lat (usec): min=10392, max=66576, avg=26919.36, stdev=3751.14 00:30:33.909 clat percentiles (usec): 00:30:33.909 | 1.00th=[17695], 5.00th=[23987], 10.00th=[24511], 20.00th=[25297], 00:30:33.909 | 30.00th=[25560], 40.00th=[26084], 50.00th=[26084], 60.00th=[26608], 00:30:33.909 | 70.00th=[26870], 80.00th=[27395], 90.00th=[30802], 95.00th=[34341], 00:30:33.909 | 99.00th=[42206], 99.50th=[44827], 99.90th=[50594], 99.95th=[66323], 00:30:33.909 | 99.99th=[66323] 00:30:33.909 bw ( KiB/s): min= 2104, max= 2512, per=4.32%, avg=2360.84, stdev=113.76, samples=19 00:30:33.909 iops : min= 526, max= 628, avg=590.21, stdev=28.44, samples=19 00:30:33.909 lat (msec) : 20=1.58%, 50=98.15%, 100=0.27% 00:30:33.909 cpu : usr=98.66%, sys=0.93%, ctx=14, majf=0, minf=37 00:30:33.909 IO depths : 1=0.1%, 2=0.1%, 4=5.8%, 8=79.7%, 16=14.3%, 32=0.0%, >=64=0.0% 00:30:33.909 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:33.909 complete : 0=0.0%, 4=89.4%, 8=6.7%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:33.909 issued rwts: total=5931,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:33.909 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:33.909 filename2: (groupid=0, jobs=1): err= 0: pid=1079184: Thu Jul 25 01:30:54 2024 00:30:33.909 read: IOPS=542, BW=2168KiB/s (2220kB/s)(21.2MiB/10009msec) 00:30:33.910 slat (usec): min=6, max=681, avg=22.63, stdev=20.73 00:30:33.910 clat (usec): min=11451, max=52646, avg=29374.88, stdev=5741.77 00:30:33.910 lat (usec): min=11463, max=52690, avg=29397.51, stdev=5741.01 00:30:33.910 clat percentiles (usec): 00:30:33.910 | 1.00th=[16712], 5.00th=[22414], 10.00th=[24773], 
20.00th=[25560], 00:30:33.910 | 30.00th=[26084], 40.00th=[26346], 50.00th=[26870], 60.00th=[30016], 00:30:33.910 | 70.00th=[32637], 80.00th=[34341], 90.00th=[36963], 95.00th=[39060], 00:30:33.910 | 99.00th=[45351], 99.50th=[49021], 99.90th=[51643], 99.95th=[52691], 00:30:33.910 | 99.99th=[52691] 00:30:33.910 bw ( KiB/s): min= 1968, max= 2336, per=3.97%, avg=2168.20, stdev=95.39, samples=20 00:30:33.910 iops : min= 492, max= 584, avg=542.05, stdev=23.85, samples=20 00:30:33.910 lat (msec) : 20=3.76%, 50=96.04%, 100=0.20% 00:30:33.910 cpu : usr=95.99%, sys=1.90%, ctx=63, majf=0, minf=25 00:30:33.910 IO depths : 1=1.0%, 2=2.0%, 4=10.0%, 8=74.2%, 16=12.8%, 32=0.0%, >=64=0.0% 00:30:33.910 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:33.910 complete : 0=0.0%, 4=90.6%, 8=4.8%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:33.910 issued rwts: total=5426,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:33.910 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:33.910 filename2: (groupid=0, jobs=1): err= 0: pid=1079185: Thu Jul 25 01:30:54 2024 00:30:33.910 read: IOPS=610, BW=2443KiB/s (2502kB/s)(23.9MiB/10021msec) 00:30:33.910 slat (nsec): min=6869, max=82616, avg=20303.54, stdev=12675.51 00:30:33.910 clat (usec): min=6369, max=49609, avg=26040.46, stdev=4558.56 00:30:33.910 lat (usec): min=6379, max=49627, avg=26060.76, stdev=4559.85 00:30:33.910 clat percentiles (usec): 00:30:33.910 | 1.00th=[14484], 5.00th=[17433], 10.00th=[20055], 20.00th=[24511], 00:30:33.910 | 30.00th=[25297], 40.00th=[25560], 50.00th=[26084], 60.00th=[26346], 00:30:33.910 | 70.00th=[26608], 80.00th=[27132], 90.00th=[32113], 95.00th=[35390], 00:30:33.910 | 99.00th=[39584], 99.50th=[42206], 99.90th=[44827], 99.95th=[46924], 00:30:33.910 | 99.99th=[49546] 00:30:33.910 bw ( KiB/s): min= 2176, max= 3152, per=4.47%, avg=2442.00, stdev=240.36, samples=20 00:30:33.910 iops : min= 544, max= 788, avg=610.50, stdev=60.09, samples=20 00:30:33.910 lat (msec) : 10=0.29%, 
20=9.17%, 50=90.54% 00:30:33.910 cpu : usr=98.46%, sys=1.08%, ctx=17, majf=0, minf=36 00:30:33.910 IO depths : 1=3.5%, 2=7.1%, 4=17.5%, 8=62.6%, 16=9.3%, 32=0.0%, >=64=0.0% 00:30:33.910 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:33.910 complete : 0=0.0%, 4=92.3%, 8=2.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:33.910 issued rwts: total=6121,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:33.910 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:33.910 filename2: (groupid=0, jobs=1): err= 0: pid=1079186: Thu Jul 25 01:30:54 2024 00:30:33.910 read: IOPS=607, BW=2430KiB/s (2488kB/s)(23.8MiB/10031msec) 00:30:33.910 slat (nsec): min=6628, max=89455, avg=27220.57, stdev=16561.45 00:30:33.910 clat (usec): min=9700, max=49906, avg=26179.45, stdev=3031.40 00:30:33.910 lat (usec): min=9712, max=49914, avg=26206.67, stdev=3031.99 00:30:33.910 clat percentiles (usec): 00:30:33.910 | 1.00th=[16909], 5.00th=[23462], 10.00th=[24511], 20.00th=[25035], 00:30:33.910 | 30.00th=[25560], 40.00th=[25822], 50.00th=[26084], 60.00th=[26346], 00:30:33.910 | 70.00th=[26608], 80.00th=[26870], 90.00th=[27395], 95.00th=[31851], 00:30:33.910 | 99.00th=[38011], 99.50th=[42730], 99.90th=[46400], 99.95th=[50070], 00:30:33.910 | 99.99th=[50070] 00:30:33.910 bw ( KiB/s): min= 2336, max= 2560, per=4.45%, avg=2431.40, stdev=49.46, samples=20 00:30:33.910 iops : min= 584, max= 640, avg=607.85, stdev=12.36, samples=20 00:30:33.910 lat (msec) : 10=0.05%, 20=2.84%, 50=97.11% 00:30:33.910 cpu : usr=97.65%, sys=1.25%, ctx=46, majf=0, minf=39 00:30:33.910 IO depths : 1=0.3%, 2=0.8%, 4=7.8%, 8=78.0%, 16=13.0%, 32=0.0%, >=64=0.0% 00:30:33.910 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:33.910 complete : 0=0.0%, 4=89.6%, 8=5.5%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:33.910 issued rwts: total=6094,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:33.910 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:33.910 filename2: 
(groupid=0, jobs=1): err= 0: pid=1079187: Thu Jul 25 01:30:54 2024 00:30:33.910 read: IOPS=566, BW=2266KiB/s (2320kB/s)(22.2MiB/10018msec) 00:30:33.910 slat (nsec): min=4957, max=80312, avg=20994.07, stdev=12921.32 00:30:33.910 clat (usec): min=12388, max=52298, avg=28129.55, stdev=5118.86 00:30:33.910 lat (usec): min=12408, max=52306, avg=28150.55, stdev=5118.11 00:30:33.910 clat percentiles (usec): 00:30:33.910 | 1.00th=[16188], 5.00th=[22414], 10.00th=[24511], 20.00th=[25297], 00:30:33.910 | 30.00th=[25560], 40.00th=[26084], 50.00th=[26346], 60.00th=[26870], 00:30:33.910 | 70.00th=[28443], 80.00th=[32375], 90.00th=[35390], 95.00th=[38011], 00:30:33.910 | 99.00th=[43779], 99.50th=[46924], 99.90th=[51643], 99.95th=[52167], 00:30:33.910 | 99.99th=[52167] 00:30:33.910 bw ( KiB/s): min= 2048, max= 2408, per=4.14%, avg=2263.20, stdev=92.89, samples=20 00:30:33.910 iops : min= 512, max= 602, avg=565.80, stdev=23.22, samples=20 00:30:33.910 lat (msec) : 20=3.63%, 50=96.23%, 100=0.14% 00:30:33.910 cpu : usr=98.61%, sys=0.99%, ctx=10, majf=0, minf=29 00:30:33.910 IO depths : 1=0.4%, 2=1.0%, 4=7.8%, 8=77.7%, 16=13.1%, 32=0.0%, >=64=0.0% 00:30:33.910 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:33.910 complete : 0=0.0%, 4=90.0%, 8=5.3%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:33.910 issued rwts: total=5674,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:33.910 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:33.910 filename2: (groupid=0, jobs=1): err= 0: pid=1079188: Thu Jul 25 01:30:54 2024 00:30:33.910 read: IOPS=563, BW=2256KiB/s (2310kB/s)(22.0MiB/10007msec) 00:30:33.910 slat (nsec): min=6879, max=83040, avg=17526.83, stdev=10472.79 00:30:33.910 clat (usec): min=12874, max=48560, avg=28267.61, stdev=4760.43 00:30:33.910 lat (usec): min=12888, max=48585, avg=28285.14, stdev=4762.21 00:30:33.910 clat percentiles (usec): 00:30:33.910 | 1.00th=[23987], 5.00th=[24511], 10.00th=[24773], 20.00th=[25297], 00:30:33.910 | 
30.00th=[25822], 40.00th=[26084], 50.00th=[26346], 60.00th=[26608], 00:30:33.910 | 70.00th=[27132], 80.00th=[31327], 90.00th=[35914], 95.00th=[39060], 00:30:33.910 | 99.00th=[43779], 99.50th=[47449], 99.90th=[48497], 99.95th=[48497], 00:30:33.910 | 99.99th=[48497] 00:30:33.910 bw ( KiB/s): min= 1792, max= 2528, per=4.18%, avg=2281.68, stdev=290.78, samples=19 00:30:33.910 iops : min= 448, max= 632, avg=570.42, stdev=72.70, samples=19 00:30:33.910 lat (msec) : 20=0.19%, 50=99.81% 00:30:33.910 cpu : usr=98.47%, sys=1.12%, ctx=14, majf=0, minf=21 00:30:33.910 IO depths : 1=2.1%, 2=4.1%, 4=8.7%, 8=70.6%, 16=14.4%, 32=0.0%, >=64=0.0% 00:30:33.910 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:33.910 complete : 0=0.0%, 4=91.0%, 8=7.1%, 16=2.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:33.910 issued rwts: total=5643,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:33.910 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:33.910 filename2: (groupid=0, jobs=1): err= 0: pid=1079189: Thu Jul 25 01:30:54 2024 00:30:33.910 read: IOPS=571, BW=2287KiB/s (2341kB/s)(22.4MiB/10022msec) 00:30:33.910 slat (nsec): min=6889, max=84071, avg=18258.60, stdev=11344.32 00:30:33.910 clat (usec): min=8814, max=53914, avg=27880.85, stdev=5192.44 00:30:33.910 lat (usec): min=8833, max=53930, avg=27899.11, stdev=5192.41 00:30:33.910 clat percentiles (usec): 00:30:33.910 | 1.00th=[15664], 5.00th=[21103], 10.00th=[24511], 20.00th=[25297], 00:30:33.910 | 30.00th=[25560], 40.00th=[26084], 50.00th=[26346], 60.00th=[26870], 00:30:33.910 | 70.00th=[27919], 80.00th=[31589], 90.00th=[35390], 95.00th=[38011], 00:30:33.910 | 99.00th=[44303], 99.50th=[46400], 99.90th=[47973], 99.95th=[53740], 00:30:33.910 | 99.99th=[53740] 00:30:33.910 bw ( KiB/s): min= 2048, max= 2589, per=4.18%, avg=2285.45, stdev=109.01, samples=20 00:30:33.910 iops : min= 512, max= 647, avg=571.35, stdev=27.22, samples=20 00:30:33.910 lat (msec) : 10=0.28%, 20=3.82%, 50=95.83%, 100=0.07% 00:30:33.910 cpu : 
usr=98.29%, sys=1.28%, ctx=16, majf=0, minf=32 00:30:33.910 IO depths : 1=0.5%, 2=1.1%, 4=7.8%, 8=77.1%, 16=13.6%, 32=0.0%, >=64=0.0% 00:30:33.910 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:33.910 complete : 0=0.0%, 4=90.0%, 8=5.9%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:33.910 issued rwts: total=5729,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:33.910 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:33.910 filename2: (groupid=0, jobs=1): err= 0: pid=1079190: Thu Jul 25 01:30:54 2024 00:30:33.910 read: IOPS=569, BW=2279KiB/s (2333kB/s)(22.3MiB/10015msec) 00:30:33.910 slat (nsec): min=6876, max=80642, avg=27984.49, stdev=17750.11 00:30:33.910 clat (usec): min=9821, max=47913, avg=27928.47, stdev=5050.92 00:30:33.910 lat (usec): min=9832, max=47961, avg=27956.45, stdev=5051.32 00:30:33.910 clat percentiles (usec): 00:30:33.910 | 1.00th=[15664], 5.00th=[21365], 10.00th=[24249], 20.00th=[25297], 00:30:33.910 | 30.00th=[25822], 40.00th=[26084], 50.00th=[26608], 60.00th=[26870], 00:30:33.910 | 70.00th=[27919], 80.00th=[31851], 90.00th=[35390], 95.00th=[37487], 00:30:33.910 | 99.00th=[42730], 99.50th=[44827], 99.90th=[46924], 99.95th=[47973], 00:30:33.910 | 99.99th=[47973] 00:30:33.910 bw ( KiB/s): min= 2160, max= 2432, per=4.16%, avg=2275.60, stdev=75.52, samples=20 00:30:33.910 iops : min= 540, max= 608, avg=568.90, stdev=18.88, samples=20 00:30:33.910 lat (msec) : 10=0.07%, 20=4.05%, 50=95.88% 00:30:33.910 cpu : usr=98.57%, sys=1.01%, ctx=20, majf=0, minf=36 00:30:33.910 IO depths : 1=0.5%, 2=1.1%, 4=7.7%, 8=77.8%, 16=12.9%, 32=0.0%, >=64=0.0% 00:30:33.910 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:33.910 complete : 0=0.0%, 4=89.9%, 8=5.1%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:33.910 issued rwts: total=5705,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:33.910 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:33.910 00:30:33.910 Run status group 0 (all jobs): 
00:30:33.910 READ: bw=53.3MiB/s (55.9MB/s), 2090KiB/s-2443KiB/s (2140kB/s-2502kB/s), io=536MiB (562MB), run=10004-10047msec 00:30:33.910 01:30:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:30:33.910 01:30:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:33.910 01:30:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:33.910 01:30:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:33.910 01:30:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:33.910 01:30:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:33.910 01:30:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.911 01:30:54 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:30:33.911 
01:30:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:33.911 bdev_null0 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:33.911 [2024-07-25 01:30:54.660297] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:33.911 bdev_null1 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:33.911 { 00:30:33.911 "params": { 00:30:33.911 "name": "Nvme$subsystem", 00:30:33.911 "trtype": "$TEST_TRANSPORT", 00:30:33.911 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:33.911 "adrfam": "ipv4", 00:30:33.911 "trsvcid": "$NVMF_PORT", 00:30:33.911 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:33.911 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:33.911 "hdgst": ${hdgst:-false}, 00:30:33.911 "ddgst": ${ddgst:-false} 00:30:33.911 }, 00:30:33.911 "method": "bdev_nvme_attach_controller" 00:30:33.911 } 00:30:33.911 EOF 00:30:33.911 )") 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:33.911 01:30:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:33.911 { 00:30:33.911 "params": { 00:30:33.911 "name": "Nvme$subsystem", 00:30:33.911 "trtype": "$TEST_TRANSPORT", 00:30:33.911 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:33.911 "adrfam": "ipv4", 00:30:33.911 "trsvcid": "$NVMF_PORT", 00:30:33.911 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:33.911 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:33.911 "hdgst": ${hdgst:-false}, 00:30:33.911 "ddgst": ${ddgst:-false} 00:30:33.911 }, 00:30:33.912 "method": "bdev_nvme_attach_controller" 00:30:33.912 } 00:30:33.912 EOF 00:30:33.912 )") 00:30:33.912 01:30:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:33.912 01:30:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:33.912 01:30:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:33.912 01:30:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:30:33.912 01:30:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:33.912 01:30:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:33.912 "params": { 00:30:33.912 "name": "Nvme0", 00:30:33.912 "trtype": "tcp", 00:30:33.912 "traddr": "10.0.0.2", 00:30:33.912 "adrfam": "ipv4", 00:30:33.912 "trsvcid": "4420", 00:30:33.912 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:33.912 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:33.912 "hdgst": false, 00:30:33.912 "ddgst": false 00:30:33.912 }, 00:30:33.912 "method": "bdev_nvme_attach_controller" 00:30:33.912 },{ 00:30:33.912 "params": { 00:30:33.912 "name": "Nvme1", 00:30:33.912 "trtype": "tcp", 00:30:33.912 "traddr": "10.0.0.2", 00:30:33.912 "adrfam": "ipv4", 00:30:33.912 "trsvcid": "4420", 00:30:33.912 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:33.912 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:33.912 "hdgst": false, 00:30:33.912 "ddgst": false 00:30:33.912 }, 00:30:33.912 "method": "bdev_nvme_attach_controller" 00:30:33.912 }' 00:30:33.912 01:30:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:33.912 01:30:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:33.912 01:30:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:33.912 01:30:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:33.912 01:30:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:33.912 01:30:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:33.912 01:30:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:33.912 01:30:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:33.912 01:30:54 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:33.912 01:30:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:33.912 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:30:33.912 ... 00:30:33.912 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:30:33.912 ... 00:30:33.912 fio-3.35 00:30:33.912 Starting 4 threads 00:30:33.912 EAL: No free 2048 kB hugepages reported on node 1 00:30:39.181 00:30:39.181 filename0: (groupid=0, jobs=1): err= 0: pid=1081134: Thu Jul 25 01:31:00 2024 00:30:39.181 read: IOPS=2696, BW=21.1MiB/s (22.1MB/s)(105MiB/5002msec) 00:30:39.181 slat (nsec): min=6196, max=28945, avg=8401.43, stdev=2707.27 00:30:39.181 clat (usec): min=1109, max=45038, avg=2942.83, stdev=2895.63 00:30:39.181 lat (usec): min=1115, max=45050, avg=2951.24, stdev=2895.67 00:30:39.181 clat percentiles (usec): 00:30:39.181 | 1.00th=[ 1467], 5.00th=[ 1745], 10.00th=[ 1958], 20.00th=[ 2245], 00:30:39.181 | 30.00th=[ 2409], 40.00th=[ 2573], 50.00th=[ 2737], 60.00th=[ 2868], 00:30:39.181 | 70.00th=[ 3032], 80.00th=[ 3261], 90.00th=[ 3556], 95.00th=[ 3851], 00:30:39.181 | 99.00th=[ 4490], 99.50th=[ 5997], 99.90th=[44303], 99.95th=[44827], 00:30:39.181 | 99.99th=[44827] 00:30:39.181 bw ( KiB/s): min=19584, max=24848, per=28.63%, avg=21580.44, stdev=1695.69, samples=9 00:30:39.181 iops : min= 2448, max= 3106, avg=2697.56, stdev=211.96, samples=9 00:30:39.181 lat (msec) : 2=11.45%, 4=85.16%, 10=2.92%, 50=0.47% 00:30:39.181 cpu : usr=95.92%, sys=3.76%, ctx=6, majf=0, minf=9 00:30:39.181 IO depths : 1=0.4%, 2=2.3%, 4=66.1%, 8=31.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:39.181 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:30:39.181 complete : 0=0.0%, 4=94.4%, 8=5.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:39.181 issued rwts: total=13490,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:39.181 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:39.181 filename0: (groupid=0, jobs=1): err= 0: pid=1081135: Thu Jul 25 01:31:00 2024 00:30:39.181 read: IOPS=2586, BW=20.2MiB/s (21.2MB/s)(101MiB/5004msec) 00:30:39.181 slat (nsec): min=6178, max=75934, avg=8338.58, stdev=2666.11 00:30:39.181 clat (usec): min=990, max=46090, avg=3069.52, stdev=3727.18 00:30:39.181 lat (usec): min=997, max=46104, avg=3077.86, stdev=3727.19 00:30:39.181 clat percentiles (usec): 00:30:39.181 | 1.00th=[ 1418], 5.00th=[ 1762], 10.00th=[ 1975], 20.00th=[ 2212], 00:30:39.181 | 30.00th=[ 2409], 40.00th=[ 2540], 50.00th=[ 2704], 60.00th=[ 2868], 00:30:39.181 | 70.00th=[ 3032], 80.00th=[ 3228], 90.00th=[ 3589], 95.00th=[ 3982], 00:30:39.181 | 99.00th=[ 5276], 99.50th=[43779], 99.90th=[45351], 99.95th=[45351], 00:30:39.181 | 99.99th=[45876] 00:30:39.181 bw ( KiB/s): min=18192, max=23264, per=27.45%, avg=20692.80, stdev=1534.69, samples=10 00:30:39.181 iops : min= 2274, max= 2908, avg=2586.60, stdev=191.84, samples=10 00:30:39.181 lat (usec) : 1000=0.02% 00:30:39.181 lat (msec) : 2=10.86%, 4=84.38%, 10=3.94%, 50=0.80% 00:30:39.181 cpu : usr=96.44%, sys=3.22%, ctx=12, majf=0, minf=0 00:30:39.181 IO depths : 1=0.4%, 2=2.8%, 4=66.0%, 8=30.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:39.181 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:39.181 complete : 0=0.0%, 4=94.4%, 8=5.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:39.181 issued rwts: total=12941,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:39.181 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:39.181 filename1: (groupid=0, jobs=1): err= 0: pid=1081136: Thu Jul 25 01:31:00 2024 00:30:39.181 read: IOPS=2153, BW=16.8MiB/s (17.6MB/s)(84.1MiB/5001msec) 00:30:39.181 slat (nsec): min=6208, max=32801, 
avg=8335.84, stdev=2728.23 00:30:39.181 clat (usec): min=1207, max=45454, avg=3688.75, stdev=4096.86 00:30:39.181 lat (usec): min=1220, max=45466, avg=3697.08, stdev=4096.90 00:30:39.181 clat percentiles (usec): 00:30:39.181 | 1.00th=[ 1778], 5.00th=[ 2114], 10.00th=[ 2376], 20.00th=[ 2671], 00:30:39.181 | 30.00th=[ 2868], 40.00th=[ 3064], 50.00th=[ 3261], 60.00th=[ 3425], 00:30:39.181 | 70.00th=[ 3621], 80.00th=[ 3916], 90.00th=[ 4359], 95.00th=[ 4752], 00:30:39.181 | 99.00th=[ 6980], 99.50th=[44303], 99.90th=[45351], 99.95th=[45351], 00:30:39.181 | 99.99th=[45351] 00:30:39.181 bw ( KiB/s): min=14576, max=20816, per=22.85%, avg=17223.11, stdev=2141.33, samples=9 00:30:39.181 iops : min= 1822, max= 2602, avg=2152.89, stdev=267.67, samples=9 00:30:39.181 lat (msec) : 2=3.19%, 4=79.24%, 10=16.60%, 50=0.97% 00:30:39.181 cpu : usr=96.66%, sys=3.00%, ctx=6, majf=0, minf=9 00:30:39.181 IO depths : 1=0.4%, 2=4.0%, 4=70.3%, 8=25.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:39.181 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:39.181 complete : 0=0.0%, 4=90.1%, 8=9.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:39.181 issued rwts: total=10768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:39.181 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:39.181 filename1: (groupid=0, jobs=1): err= 0: pid=1081137: Thu Jul 25 01:31:00 2024 00:30:39.181 read: IOPS=1989, BW=15.5MiB/s (16.3MB/s)(77.8MiB/5004msec) 00:30:39.181 slat (nsec): min=6222, max=28312, avg=8583.39, stdev=2732.99 00:30:39.181 clat (usec): min=1288, max=46113, avg=3995.96, stdev=3423.18 00:30:39.181 lat (usec): min=1295, max=46125, avg=4004.54, stdev=3423.18 00:30:39.181 clat percentiles (usec): 00:30:39.181 | 1.00th=[ 1795], 5.00th=[ 2343], 10.00th=[ 2606], 20.00th=[ 2999], 00:30:39.181 | 30.00th=[ 3261], 40.00th=[ 3458], 50.00th=[ 3687], 60.00th=[ 3884], 00:30:39.181 | 70.00th=[ 4113], 80.00th=[ 4424], 90.00th=[ 4948], 95.00th=[ 5473], 00:30:39.181 | 99.00th=[ 7439], 
99.50th=[44827], 99.90th=[45876], 99.95th=[45876], 00:30:39.181 | 99.99th=[45876] 00:30:39.181 bw ( KiB/s): min=13728, max=18192, per=21.11%, avg=15916.80, stdev=1553.86, samples=10 00:30:39.181 iops : min= 1716, max= 2274, avg=1989.60, stdev=194.23, samples=10 00:30:39.181 lat (msec) : 2=2.04%, 4=63.73%, 10=33.59%, 50=0.64% 00:30:39.181 cpu : usr=96.84%, sys=2.86%, ctx=5, majf=0, minf=10 00:30:39.181 IO depths : 1=0.3%, 2=3.1%, 4=65.8%, 8=30.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:39.181 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:39.181 complete : 0=0.0%, 4=93.8%, 8=6.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:39.181 issued rwts: total=9956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:39.181 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:39.181 00:30:39.181 Run status group 0 (all jobs): 00:30:39.181 READ: bw=73.6MiB/s (77.2MB/s), 15.5MiB/s-21.1MiB/s (16.3MB/s-22.1MB/s), io=368MiB (386MB), run=5001-5004msec 00:30:39.181 01:31:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:30:39.181 01:31:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:39.181 01:31:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:39.181 01:31:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:39.181 01:31:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:39.181 01:31:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:39.181 01:31:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:39.181 01:31:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:39.181 01:31:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:39.181 01:31:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:39.181 
01:31:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:39.181 01:31:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:39.181 01:31:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:39.181 01:31:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:39.181 01:31:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:39.181 01:31:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:30:39.181 01:31:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:39.181 01:31:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:39.181 01:31:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:39.181 01:31:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:39.181 01:31:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:39.181 01:31:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:39.181 01:31:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:39.181 01:31:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:39.181 00:30:39.181 real 0m23.850s 00:30:39.181 user 4m49.845s 00:30:39.181 sys 0m5.162s 00:30:39.181 01:31:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:39.181 01:31:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:39.181 ************************************ 00:30:39.181 END TEST fio_dif_rand_params 00:30:39.182 ************************************ 00:30:39.182 01:31:00 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:30:39.182 01:31:00 nvmf_dif -- target/dif.sh@144 -- # run_test 
fio_dif_digest fio_dif_digest 00:30:39.182 01:31:00 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:39.182 01:31:00 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:39.182 01:31:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:39.182 ************************************ 00:30:39.182 START TEST fio_dif_digest 00:30:39.182 ************************************ 00:30:39.182 01:31:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:30:39.182 01:31:00 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:30:39.182 01:31:00 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:30:39.182 01:31:00 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:30:39.182 01:31:00 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:30:39.182 01:31:00 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:30:39.182 01:31:00 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:30:39.182 01:31:00 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:30:39.182 01:31:00 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:30:39.182 01:31:00 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:30:39.182 01:31:00 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:30:39.182 01:31:00 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:30:39.182 01:31:00 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:30:39.182 01:31:00 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:30:39.182 01:31:00 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:30:39.182 01:31:00 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:30:39.182 01:31:00 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:30:39.182 01:31:00 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:39.182 01:31:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:39.182 bdev_null0 00:30:39.182 01:31:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:39.182 01:31:00 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:39.182 01:31:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:39.182 01:31:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:39.182 01:31:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:39.182 01:31:00 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:39.182 01:31:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:39.182 01:31:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:39.182 01:31:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:39.182 01:31:00 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:39.182 01:31:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:39.182 01:31:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:39.182 [2024-07-25 01:31:00.965616] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:39.182 01:31:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:39.182 01:31:00 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:30:39.182 01:31:00 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:39.182 
01:31:00 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:30:39.182 01:31:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:39.182 01:31:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:39.182 01:31:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:39.182 01:31:00 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:39.182 01:31:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:39.182 01:31:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:39.182 01:31:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:30:39.182 01:31:00 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:30:39.182 01:31:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:30:39.182 01:31:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:39.182 01:31:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:39.182 01:31:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:30:39.182 01:31:00 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:30:39.182 01:31:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:39.182 01:31:00 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:30:39.182 01:31:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:39.182 { 00:30:39.182 "params": { 00:30:39.182 "name": "Nvme$subsystem", 00:30:39.182 "trtype": "$TEST_TRANSPORT", 00:30:39.182 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:30:39.182 "adrfam": "ipv4", 00:30:39.182 "trsvcid": "$NVMF_PORT", 00:30:39.182 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:39.182 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:39.182 "hdgst": ${hdgst:-false}, 00:30:39.182 "ddgst": ${ddgst:-false} 00:30:39.182 }, 00:30:39.182 "method": "bdev_nvme_attach_controller" 00:30:39.182 } 00:30:39.182 EOF 00:30:39.182 )") 00:30:39.182 01:31:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:30:39.182 01:31:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:39.182 01:31:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:30:39.182 01:31:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:39.182 01:31:00 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:30:39.182 01:31:00 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:30:39.182 01:31:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:30:39.182 01:31:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:30:39.182 01:31:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:39.182 "params": { 00:30:39.182 "name": "Nvme0", 00:30:39.182 "trtype": "tcp", 00:30:39.182 "traddr": "10.0.0.2", 00:30:39.182 "adrfam": "ipv4", 00:30:39.182 "trsvcid": "4420", 00:30:39.182 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:39.182 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:39.182 "hdgst": true, 00:30:39.182 "ddgst": true 00:30:39.182 }, 00:30:39.182 "method": "bdev_nvme_attach_controller" 00:30:39.182 }' 00:30:39.182 01:31:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:39.182 01:31:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:39.182 01:31:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:39.182 01:31:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:39.182 01:31:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:39.182 01:31:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:39.182 01:31:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:39.182 01:31:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:39.182 01:31:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:39.182 01:31:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:39.182 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:30:39.182 ... 
00:30:39.182 fio-3.35 00:30:39.182 Starting 3 threads 00:30:39.182 EAL: No free 2048 kB hugepages reported on node 1 00:30:51.382 00:30:51.382 filename0: (groupid=0, jobs=1): err= 0: pid=1082194: Thu Jul 25 01:31:11 2024 00:30:51.382 read: IOPS=203, BW=25.4MiB/s (26.7MB/s)(255MiB/10017msec) 00:30:51.382 slat (nsec): min=6441, max=44985, avg=12941.13, stdev=4574.00 00:30:51.382 clat (usec): min=6068, max=99597, avg=14726.75, stdev=11701.55 00:30:51.382 lat (usec): min=6077, max=99609, avg=14739.69, stdev=11701.61 00:30:51.382 clat percentiles (usec): 00:30:51.382 | 1.00th=[ 7242], 5.00th=[ 8455], 10.00th=[ 9241], 20.00th=[10159], 00:30:51.382 | 30.00th=[10683], 40.00th=[11207], 50.00th=[11600], 60.00th=[11994], 00:30:51.382 | 70.00th=[12387], 80.00th=[13173], 90.00th=[15139], 95.00th=[52691], 00:30:51.382 | 99.00th=[57410], 99.50th=[58459], 99.90th=[67634], 99.95th=[68682], 00:30:51.382 | 99.99th=[99091] 00:30:51.382 bw ( KiB/s): min=18688, max=30976, per=28.43%, avg=26058.11, stdev=3340.64, samples=19 00:30:51.382 iops : min= 146, max= 242, avg=203.58, stdev=26.10, samples=19 00:30:51.382 lat (msec) : 10=17.12%, 20=75.12%, 50=0.59%, 100=7.16% 00:30:51.382 cpu : usr=95.69%, sys=3.95%, ctx=18, majf=0, minf=129 00:30:51.382 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:51.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:51.382 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:51.382 issued rwts: total=2038,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:51.382 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:51.382 filename0: (groupid=0, jobs=1): err= 0: pid=1082195: Thu Jul 25 01:31:11 2024 00:30:51.382 read: IOPS=235, BW=29.5MiB/s (30.9MB/s)(296MiB/10024msec) 00:30:51.382 slat (nsec): min=6502, max=63127, avg=13225.51, stdev=4962.53 00:30:51.382 clat (usec): min=5510, max=64761, avg=12705.05, stdev=9456.19 00:30:51.382 lat (usec): min=5519, max=64788, 
avg=12718.27, stdev=9456.08 00:30:51.382 clat percentiles (usec): 00:30:51.382 | 1.00th=[ 5866], 5.00th=[ 7242], 10.00th=[ 8029], 20.00th=[ 8848], 00:30:51.382 | 30.00th=[ 9765], 40.00th=[10421], 50.00th=[10814], 60.00th=[11207], 00:30:51.382 | 70.00th=[11731], 80.00th=[12256], 90.00th=[13698], 95.00th=[48497], 00:30:51.382 | 99.00th=[54264], 99.50th=[55313], 99.90th=[64226], 99.95th=[64750], 00:30:51.382 | 99.99th=[64750] 00:30:51.382 bw ( KiB/s): min=23808, max=41728, per=32.97%, avg=30220.80, stdev=4522.22, samples=20 00:30:51.382 iops : min= 186, max= 326, avg=236.10, stdev=35.33, samples=20 00:30:51.382 lat (msec) : 10=32.70%, 20=62.23%, 50=0.68%, 100=4.40% 00:30:51.382 cpu : usr=95.28%, sys=4.35%, ctx=16, majf=0, minf=70 00:30:51.382 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:51.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:51.382 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:51.382 issued rwts: total=2364,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:51.382 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:51.382 filename0: (groupid=0, jobs=1): err= 0: pid=1082196: Thu Jul 25 01:31:11 2024 00:30:51.382 read: IOPS=277, BW=34.7MiB/s (36.4MB/s)(347MiB/10005msec) 00:30:51.382 slat (nsec): min=6464, max=55624, avg=12448.08, stdev=5276.33 00:30:51.382 clat (usec): min=5597, max=95465, avg=10800.56, stdev=5827.65 00:30:51.382 lat (usec): min=5605, max=95479, avg=10813.01, stdev=5828.13 00:30:51.382 clat percentiles (usec): 00:30:51.382 | 1.00th=[ 6063], 5.00th=[ 6783], 10.00th=[ 7373], 20.00th=[ 8160], 00:30:51.382 | 30.00th=[ 8848], 40.00th=[ 9896], 50.00th=[10421], 60.00th=[10945], 00:30:51.382 | 70.00th=[11338], 80.00th=[11863], 90.00th=[12518], 95.00th=[13698], 00:30:51.382 | 99.00th=[52691], 99.50th=[54264], 99.90th=[56886], 99.95th=[57934], 00:30:51.382 | 99.99th=[95945] 00:30:51.382 bw ( KiB/s): min=27136, max=41984, per=38.78%, avg=35543.58, 
stdev=3854.44, samples=19 00:30:51.382 iops : min= 212, max= 328, avg=277.68, stdev=30.11, samples=19 00:30:51.382 lat (msec) : 10=41.98%, 20=56.43%, 50=0.11%, 100=1.48% 00:30:51.382 cpu : usr=94.97%, sys=4.65%, ctx=14, majf=0, minf=151 00:30:51.382 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:51.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:51.382 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:51.382 issued rwts: total=2775,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:51.382 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:51.382 00:30:51.382 Run status group 0 (all jobs): 00:30:51.382 READ: bw=89.5MiB/s (93.8MB/s), 25.4MiB/s-34.7MiB/s (26.7MB/s-36.4MB/s), io=897MiB (941MB), run=10005-10024msec 00:30:51.382 01:31:11 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:30:51.382 01:31:11 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:30:51.382 01:31:11 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:30:51.382 01:31:11 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:51.382 01:31:11 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:30:51.382 01:31:11 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:51.382 01:31:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:51.382 01:31:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:51.382 01:31:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:51.382 01:31:11 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:51.382 01:31:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:51.382 01:31:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:51.382 01:31:11 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:51.382 00:30:51.382 real 0m10.984s 00:30:51.382 user 0m35.065s 00:30:51.382 sys 0m1.568s 00:30:51.382 01:31:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:51.382 01:31:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:51.382 ************************************ 00:30:51.382 END TEST fio_dif_digest 00:30:51.382 ************************************ 00:30:51.382 01:31:11 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:30:51.382 01:31:11 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:30:51.382 01:31:11 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:30:51.382 01:31:11 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:51.382 01:31:11 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:30:51.382 01:31:11 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:51.382 01:31:11 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:30:51.382 01:31:11 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:51.382 01:31:11 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:51.382 rmmod nvme_tcp 00:30:51.382 rmmod nvme_fabrics 00:30:51.382 rmmod nvme_keyring 00:30:51.383 01:31:12 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:51.383 01:31:12 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:30:51.383 01:31:12 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:30:51.383 01:31:12 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 1073804 ']' 00:30:51.383 01:31:12 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 1073804 00:30:51.383 01:31:12 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 1073804 ']' 00:30:51.383 01:31:12 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 1073804 00:30:51.383 01:31:12 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:30:51.383 01:31:12 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:51.383 01:31:12 nvmf_dif -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1073804 00:30:51.383 01:31:12 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:51.383 01:31:12 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:51.383 01:31:12 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1073804' 00:30:51.383 killing process with pid 1073804 00:30:51.383 01:31:12 nvmf_dif -- common/autotest_common.sh@967 -- # kill 1073804 00:30:51.383 01:31:12 nvmf_dif -- common/autotest_common.sh@972 -- # wait 1073804 00:30:51.383 01:31:12 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:30:51.383 01:31:12 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:30:51.946 Waiting for block devices as requested 00:30:52.204 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:30:52.204 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:52.204 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:52.462 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:52.462 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:52.462 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:52.462 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:52.720 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:52.720 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:52.720 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:52.720 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:52.978 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:52.978 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:52.978 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:53.236 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:53.236 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:53.236 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:53.236 01:31:15 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:53.237 01:31:15 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:53.237 
01:31:15 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:53.237 01:31:15 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:53.237 01:31:15 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:53.237 01:31:15 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:53.237 01:31:15 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:55.766 01:31:17 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:55.766 00:30:55.766 real 1m12.113s 00:30:55.766 user 7m6.304s 00:30:55.766 sys 0m18.838s 00:30:55.766 01:31:17 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:55.766 01:31:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:55.766 ************************************ 00:30:55.766 END TEST nvmf_dif 00:30:55.766 ************************************ 00:30:55.766 01:31:17 -- common/autotest_common.sh@1142 -- # return 0 00:30:55.766 01:31:17 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:30:55.766 01:31:17 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:55.766 01:31:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:55.766 01:31:17 -- common/autotest_common.sh@10 -- # set +x 00:30:55.766 ************************************ 00:30:55.766 START TEST nvmf_abort_qd_sizes 00:30:55.766 ************************************ 00:30:55.767 01:31:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:30:55.767 * Looking for test storage... 
00:30:55.767 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:55.767 01:31:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:55.767 01:31:17 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:30:55.767 01:31:17 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:55.767 01:31:17 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:55.767 01:31:17 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:55.767 01:31:17 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:55.767 01:31:17 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:55.767 01:31:17 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:55.767 01:31:17 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:55.767 01:31:17 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:55.767 01:31:17 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:55.767 01:31:17 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:55.767 01:31:17 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:55.767 01:31:17 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:55.767 01:31:17 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:55.767 01:31:17 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:55.767 01:31:17 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:55.767 01:31:17 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:55.767 01:31:17 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:55.767 01:31:17 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:55.767 01:31:17 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:55.767 01:31:17 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:55.767 01:31:17 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.767 01:31:17 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.767 01:31:17 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.767 01:31:17 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:30:55.767 01:31:17 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.767 01:31:17 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:30:55.767 01:31:17 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:55.767 01:31:17 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:55.767 01:31:17 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:55.767 01:31:17 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:55.767 01:31:17 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:55.767 01:31:17 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:55.767 01:31:17 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:55.767 01:31:17 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:55.767 01:31:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:30:55.767 01:31:17 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:55.767 01:31:17 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:55.767 01:31:17 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:55.767 01:31:17 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:55.767 01:31:17 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:55.767 01:31:17 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:55.767 01:31:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:55.767 01:31:17 
nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:55.767 01:31:17 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:55.767 01:31:17 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:55.767 01:31:17 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:30:55.767 01:31:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@304 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:01.097 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:01.097 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
00:31:01.097 Found net devices under 0000:86:00.0: cvl_0_0 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:01.097 Found net devices under 0000:86:00.1: cvl_0_1 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:01.097 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:01.097 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:31:01.097 00:31:01.097 --- 10.0.0.2 ping statistics --- 00:31:01.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:01.097 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:01.097 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:01.097 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.262 ms 00:31:01.097 00:31:01.097 --- 10.0.0.1 ping statistics --- 00:31:01.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:01.097 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:31:01.097 01:31:23 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:03.630 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:31:03.630 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:31:03.630 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:31:03.630 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:31:03.630 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:31:03.630 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:31:03.630 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:31:03.630 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:31:03.630 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:31:03.630 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:31:03.630 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:31:03.630 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:31:03.630 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:31:03.630 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:31:03.630 0000:80:04.1 (8086 2021): 
ioatdma -> vfio-pci 00:31:03.630 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:31:04.197 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:31:04.457 01:31:26 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:04.457 01:31:26 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:04.457 01:31:26 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:04.457 01:31:26 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:04.457 01:31:26 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:04.457 01:31:26 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:04.457 01:31:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:31:04.457 01:31:26 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:04.457 01:31:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:04.457 01:31:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:04.457 01:31:26 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=1089916 00:31:04.457 01:31:26 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 1089916 00:31:04.457 01:31:26 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:31:04.457 01:31:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 1089916 ']' 00:31:04.457 01:31:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:04.457 01:31:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:04.457 01:31:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:31:04.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:04.457 01:31:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:04.457 01:31:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:04.457 [2024-07-25 01:31:26.814754] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:31:04.457 [2024-07-25 01:31:26.814791] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:04.457 EAL: No free 2048 kB hugepages reported on node 1 00:31:04.457 [2024-07-25 01:31:26.873201] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:04.715 [2024-07-25 01:31:26.954266] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:04.715 [2024-07-25 01:31:26.954308] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:04.715 [2024-07-25 01:31:26.954315] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:04.715 [2024-07-25 01:31:26.954321] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:04.715 [2024-07-25 01:31:26.954326] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:04.715 [2024-07-25 01:31:26.954369] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:04.715 [2024-07-25 01:31:26.954391] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:04.715 [2024-07-25 01:31:26.954477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:31:04.715 [2024-07-25 01:31:26.954478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:05.281 01:31:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:05.281 01:31:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:31:05.281 01:31:27 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:05.282 01:31:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:05.282 01:31:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:05.282 01:31:27 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:05.282 01:31:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:31:05.282 01:31:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:31:05.282 01:31:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:31:05.282 01:31:27 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:31:05.282 01:31:27 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:31:05.282 01:31:27 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:5e:00.0 ]] 00:31:05.282 01:31:27 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:31:05.282 01:31:27 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:31:05.282 01:31:27 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 
00:31:05.282 01:31:27 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:31:05.282 01:31:27 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:31:05.282 01:31:27 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:31:05.282 01:31:27 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:31:05.282 01:31:27 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:5e:00.0 00:31:05.282 01:31:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:31:05.282 01:31:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:31:05.282 01:31:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:31:05.282 01:31:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:05.282 01:31:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:05.282 01:31:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:05.282 ************************************ 00:31:05.282 START TEST spdk_target_abort 00:31:05.282 ************************************ 00:31:05.282 01:31:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:31:05.282 01:31:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:31:05.282 01:31:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:31:05.282 01:31:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.282 01:31:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:08.565 spdk_targetn1 00:31:08.565 01:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.566 01:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:08.566 01:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.566 01:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:08.566 [2024-07-25 01:31:30.547848] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:08.566 01:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.566 01:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:31:08.566 01:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.566 01:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:08.566 01:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.566 01:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:31:08.566 01:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.566 01:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:08.566 01:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.566 01:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:31:08.566 01:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.566 01:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:08.566 [2024-07-25 01:31:30.584927] tcp.c:1006:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:08.566 01:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.566 01:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:31:08.566 01:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:08.566 01:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:08.566 01:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:31:08.566 01:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:08.566 01:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:31:08.566 01:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:08.566 01:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:08.566 01:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:08.566 01:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:08.566 01:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:08.566 01:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:08.566 01:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:08.566 01:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:08.566 01:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:31:08.566 01:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:08.566 01:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:08.566 01:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:08.566 01:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:08.566 01:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:08.566 01:31:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:08.566 EAL: No free 2048 kB hugepages reported on node 1 00:31:11.848 Initializing NVMe Controllers 00:31:11.848 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:11.848 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:11.848 Initialization complete. Launching workers. 
00:31:11.848 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 5455, failed: 0 00:31:11.848 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1675, failed to submit 3780 00:31:11.848 success 942, unsuccess 733, failed 0 00:31:11.848 01:31:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:11.848 01:31:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:11.848 EAL: No free 2048 kB hugepages reported on node 1 00:31:15.128 Initializing NVMe Controllers 00:31:15.128 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:15.128 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:15.128 Initialization complete. Launching workers. 
00:31:15.128 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8611, failed: 0 00:31:15.128 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1228, failed to submit 7383 00:31:15.128 success 314, unsuccess 914, failed 0 00:31:15.128 01:31:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:15.128 01:31:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:15.128 EAL: No free 2048 kB hugepages reported on node 1 00:31:18.412 Initializing NVMe Controllers 00:31:18.412 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:18.412 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:18.412 Initialization complete. Launching workers. 
00:31:18.413 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 33764, failed: 0 00:31:18.413 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2795, failed to submit 30969 00:31:18.413 success 701, unsuccess 2094, failed 0 00:31:18.413 01:31:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:31:18.413 01:31:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:18.413 01:31:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:18.413 01:31:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:18.413 01:31:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:31:18.413 01:31:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:18.413 01:31:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:19.345 01:31:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:19.345 01:31:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1089916 00:31:19.345 01:31:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 1089916 ']' 00:31:19.345 01:31:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 1089916 00:31:19.345 01:31:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:31:19.345 01:31:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:19.345 01:31:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1089916 00:31:19.345 01:31:41 nvmf_abort_qd_sizes.spdk_target_abort 
-- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:19.345 01:31:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:19.345 01:31:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1089916' 00:31:19.345 killing process with pid 1089916 00:31:19.345 01:31:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 1089916 00:31:19.345 01:31:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 1089916 00:31:19.604 00:31:19.604 real 0m14.188s 00:31:19.604 user 0m56.667s 00:31:19.604 sys 0m2.136s 00:31:19.604 01:31:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:19.604 01:31:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:19.604 ************************************ 00:31:19.604 END TEST spdk_target_abort 00:31:19.604 ************************************ 00:31:19.604 01:31:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:31:19.604 01:31:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:31:19.604 01:31:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:19.604 01:31:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:19.604 01:31:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:19.604 ************************************ 00:31:19.604 START TEST kernel_target_abort 00:31:19.604 ************************************ 00:31:19.605 01:31:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:31:19.605 01:31:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:31:19.605 01:31:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local 
ip 00:31:19.605 01:31:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:19.605 01:31:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:19.605 01:31:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:19.605 01:31:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:19.605 01:31:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:19.605 01:31:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:19.605 01:31:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:19.605 01:31:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:19.605 01:31:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:19.605 01:31:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:31:19.605 01:31:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:31:19.605 01:31:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:31:19.605 01:31:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:19.605 01:31:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:19.605 01:31:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:19.605 01:31:41 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:31:19.605 01:31:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:31:19.605 01:31:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:31:19.605 01:31:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:19.605 01:31:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:22.136 Waiting for block devices as requested 00:31:22.136 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:31:22.136 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:22.394 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:22.394 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:22.394 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:22.394 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:22.653 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:22.653 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:22.653 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:22.653 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:22.912 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:22.912 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:22.912 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:22.912 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:23.170 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:23.170 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:23.170 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:23.429 01:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:31:23.429 01:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:23.429 01:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 
00:31:23.429 01:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:31:23.429 01:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:23.429 01:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:31:23.429 01:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:31:23.430 01:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:31:23.430 01:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:31:23.430 No valid GPT data, bailing 00:31:23.430 01:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:23.430 01:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:31:23.430 01:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:31:23.430 01:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:31:23.430 01:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:31:23.430 01:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:23.430 01:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:23.430 01:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:23.430 01:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:31:23.430 01:31:45 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:31:23.430 01:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:31:23.430 01:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:31:23.430 01:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:31:23.430 01:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:31:23.430 01:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:31:23.430 01:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:31:23.430 01:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:23.430 01:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:31:23.430 00:31:23.430 Discovery Log Number of Records 2, Generation counter 2 00:31:23.430 =====Discovery Log Entry 0====== 00:31:23.430 trtype: tcp 00:31:23.430 adrfam: ipv4 00:31:23.430 subtype: current discovery subsystem 00:31:23.430 treq: not specified, sq flow control disable supported 00:31:23.430 portid: 1 00:31:23.430 trsvcid: 4420 00:31:23.430 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:23.430 traddr: 10.0.0.1 00:31:23.430 eflags: none 00:31:23.430 sectype: none 00:31:23.430 =====Discovery Log Entry 1====== 00:31:23.430 trtype: tcp 00:31:23.430 adrfam: ipv4 00:31:23.430 subtype: nvme subsystem 00:31:23.430 treq: not specified, sq flow control disable supported 00:31:23.430 portid: 1 00:31:23.430 trsvcid: 4420 00:31:23.430 subnqn: nqn.2016-06.io.spdk:testnqn 00:31:23.430 traddr: 10.0.0.1 00:31:23.430 eflags: none 00:31:23.430 
sectype: none 00:31:23.430 01:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:31:23.430 01:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:23.430 01:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:23.430 01:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:31:23.430 01:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:23.430 01:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:31:23.430 01:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:23.430 01:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:23.430 01:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:23.430 01:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:23.430 01:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:23.430 01:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:23.430 01:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:23.430 01:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:23.430 01:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:31:23.430 01:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- 
target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:23.430 01:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:31:23.430 01:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:23.430 01:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:23.430 01:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:23.430 01:31:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:23.430 EAL: No free 2048 kB hugepages reported on node 1 00:31:26.800 Initializing NVMe Controllers 00:31:26.800 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:26.800 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:26.800 Initialization complete. Launching workers. 
00:31:26.800 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 29442, failed: 0 00:31:26.800 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29442, failed to submit 0 00:31:26.800 success 0, unsuccess 29442, failed 0 00:31:26.800 01:31:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:26.800 01:31:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:26.800 EAL: No free 2048 kB hugepages reported on node 1 00:31:30.090 Initializing NVMe Controllers 00:31:30.090 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:30.090 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:30.090 Initialization complete. Launching workers. 
00:31:30.090 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 60976, failed: 0 00:31:30.090 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 15398, failed to submit 45578 00:31:30.090 success 0, unsuccess 15398, failed 0 00:31:30.090 01:31:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:30.090 01:31:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:30.090 EAL: No free 2048 kB hugepages reported on node 1 00:31:32.622 Initializing NVMe Controllers 00:31:32.622 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:32.622 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:32.622 Initialization complete. Launching workers. 
00:31:32.622 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 60483, failed: 0 00:31:32.622 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 15090, failed to submit 45393 00:31:32.622 success 0, unsuccess 15090, failed 0 00:31:32.622 01:31:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:31:32.623 01:31:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:31:32.623 01:31:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:31:32.623 01:31:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:32.623 01:31:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:32.623 01:31:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:31:32.623 01:31:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:32.623 01:31:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:31:32.623 01:31:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:31:32.623 01:31:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:35.154 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:31:35.154 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:31:35.154 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:31:35.154 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:31:35.154 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:31:35.154 
0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:31:35.154 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:31:35.154 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:31:35.154 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:31:35.154 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:31:35.154 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:31:35.154 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:31:35.154 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:31:35.154 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:31:35.154 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:31:35.154 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:31:36.092 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:31:36.350 00:31:36.350 real 0m16.642s 00:31:36.350 user 0m4.402s 00:31:36.350 sys 0m5.256s 00:31:36.350 01:31:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:36.350 01:31:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:36.350 ************************************ 00:31:36.350 END TEST kernel_target_abort 00:31:36.350 ************************************ 00:31:36.350 01:31:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:31:36.350 01:31:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:31:36.350 01:31:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:31:36.350 01:31:58 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:36.350 01:31:58 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:31:36.350 01:31:58 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:36.350 01:31:58 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:31:36.350 01:31:58 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:36.350 01:31:58 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:36.350 rmmod nvme_tcp 00:31:36.350 rmmod nvme_fabrics 
00:31:36.350 rmmod nvme_keyring 00:31:36.350 01:31:58 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:36.350 01:31:58 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:31:36.350 01:31:58 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:31:36.350 01:31:58 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 1089916 ']' 00:31:36.350 01:31:58 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 1089916 00:31:36.350 01:31:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 1089916 ']' 00:31:36.350 01:31:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 1089916 00:31:36.350 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1089916) - No such process 00:31:36.350 01:31:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 1089916 is not found' 00:31:36.350 Process with pid 1089916 is not found 00:31:36.350 01:31:58 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:31:36.350 01:31:58 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:38.888 Waiting for block devices as requested 00:31:38.888 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:31:38.888 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:38.888 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:38.888 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:38.888 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:39.147 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:39.147 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:39.147 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:39.147 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:39.406 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:39.406 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:39.406 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:39.664 0000:80:04.4 (8086 2021): 
vfio-pci -> ioatdma 00:31:39.664 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:39.664 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:39.920 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:39.920 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:39.920 01:32:02 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:39.920 01:32:02 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:39.920 01:32:02 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:39.920 01:32:02 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:39.920 01:32:02 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:39.920 01:32:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:39.920 01:32:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:42.454 01:32:04 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:42.454 00:31:42.454 real 0m46.506s 00:31:42.454 user 1m4.815s 00:31:42.454 sys 0m15.201s 00:31:42.454 01:32:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:42.454 01:32:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:42.454 ************************************ 00:31:42.454 END TEST nvmf_abort_qd_sizes 00:31:42.454 ************************************ 00:31:42.454 01:32:04 -- common/autotest_common.sh@1142 -- # return 0 00:31:42.454 01:32:04 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:31:42.454 01:32:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:42.454 01:32:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:42.454 01:32:04 -- common/autotest_common.sh@10 -- # set +x 00:31:42.454 ************************************ 00:31:42.454 START TEST keyring_file 00:31:42.454 
************************************ 00:31:42.454 01:32:04 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:31:42.454 * Looking for test storage... 00:31:42.454 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:31:42.454 01:32:04 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:31:42.454 01:32:04 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:42.454 01:32:04 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:31:42.454 01:32:04 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:42.454 01:32:04 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:42.454 01:32:04 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:42.454 01:32:04 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:42.454 01:32:04 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:42.454 01:32:04 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:42.454 01:32:04 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:42.454 01:32:04 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:42.454 01:32:04 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:42.454 01:32:04 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:42.454 01:32:04 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:42.454 01:32:04 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:42.454 01:32:04 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:42.454 01:32:04 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:42.454 
01:32:04 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:42.454 01:32:04 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:42.454 01:32:04 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:42.454 01:32:04 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:42.454 01:32:04 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:42.454 01:32:04 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:42.454 01:32:04 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:42.454 01:32:04 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:42.454 01:32:04 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:42.454 01:32:04 
keyring_file -- paths/export.sh@5 -- # export PATH 00:31:42.454 01:32:04 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:42.454 01:32:04 keyring_file -- nvmf/common.sh@47 -- # : 0 00:31:42.454 01:32:04 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:42.454 01:32:04 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:42.454 01:32:04 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:42.454 01:32:04 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:42.454 01:32:04 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:42.454 01:32:04 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:42.454 01:32:04 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:42.454 01:32:04 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:42.454 01:32:04 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:31:42.454 01:32:04 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:31:42.454 01:32:04 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:31:42.454 01:32:04 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:31:42.454 01:32:04 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:31:42.454 01:32:04 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:31:42.454 01:32:04 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:31:42.454 01:32:04 keyring_file -- keyring/common.sh@15 -- # local 
name key digest path 00:31:42.454 01:32:04 keyring_file -- keyring/common.sh@17 -- # name=key0 00:31:42.454 01:32:04 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:31:42.454 01:32:04 keyring_file -- keyring/common.sh@17 -- # digest=0 00:31:42.454 01:32:04 keyring_file -- keyring/common.sh@18 -- # mktemp 00:31:42.454 01:32:04 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.yvf06GaAef 00:31:42.454 01:32:04 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:31:42.454 01:32:04 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:31:42.454 01:32:04 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:31:42.454 01:32:04 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:42.454 01:32:04 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:31:42.454 01:32:04 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:31:42.454 01:32:04 keyring_file -- nvmf/common.sh@705 -- # python - 00:31:42.454 01:32:04 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.yvf06GaAef 00:31:42.454 01:32:04 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.yvf06GaAef 00:31:42.454 01:32:04 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.yvf06GaAef 00:31:42.454 01:32:04 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:31:42.454 01:32:04 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:31:42.455 01:32:04 keyring_file -- keyring/common.sh@17 -- # name=key1 00:31:42.455 01:32:04 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:31:42.455 01:32:04 keyring_file -- keyring/common.sh@17 -- # digest=0 00:31:42.455 01:32:04 keyring_file -- keyring/common.sh@18 -- # mktemp 00:31:42.455 01:32:04 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.N6CgtzjfM8 00:31:42.455 01:32:04 
keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:31:42.455 01:32:04 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:31:42.455 01:32:04 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:31:42.455 01:32:04 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:42.455 01:32:04 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:31:42.455 01:32:04 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:31:42.455 01:32:04 keyring_file -- nvmf/common.sh@705 -- # python - 00:31:42.455 01:32:04 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.N6CgtzjfM8 00:31:42.455 01:32:04 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.N6CgtzjfM8 00:31:42.455 01:32:04 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.N6CgtzjfM8 00:31:42.455 01:32:04 keyring_file -- keyring/file.sh@30 -- # tgtpid=1098558 00:31:42.455 01:32:04 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1098558 00:31:42.455 01:32:04 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:31:42.455 01:32:04 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1098558 ']' 00:31:42.455 01:32:04 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:42.455 01:32:04 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:42.455 01:32:04 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:42.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
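The `prep_key` calls above pipe through `python -` to turn a hex key into the NVMe TLS PSK interchange form (`NVMeTLSkey-1:...`) before writing it to the temp file. A minimal sketch of that formatting follows; the exact payload layout (key bytes followed by a little-endian CRC32, base64-encoded) is an assumption about what SPDK's `format_key` helper computes, not taken from this log:

```python
import base64
import zlib

def format_interchange_psk(key_hex: str, digest: int = 0) -> str:
    # Hypothetical re-implementation of the format_key step shown in the
    # trace: encode the configured key bytes plus their CRC32 (assumed
    # little-endian) as base64 inside the NVMeTLSkey-1 interchange wrapper.
    key = bytes.fromhex(key_hex)
    payload = key + zlib.crc32(key).to_bytes(4, "little")
    return f"NVMeTLSkey-1:{digest:02x}:{base64.b64encode(payload).decode()}:"

# key0 from the trace, digest 0 (no hash)
print(format_interchange_psk("00112233445566778899aabbccddeeff", 0))
```

The resulting string is what gets written to `/tmp/tmp.*` and registered with `keyring_file_add_key` later in the run.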
00:31:42.455 01:32:04 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:42.455 01:32:04 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:42.455 [2024-07-25 01:32:04.665921] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:31:42.455 [2024-07-25 01:32:04.665972] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1098558 ] 00:31:42.455 EAL: No free 2048 kB hugepages reported on node 1 00:31:42.455 [2024-07-25 01:32:04.719179] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:42.455 [2024-07-25 01:32:04.798054] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:43.023 01:32:05 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:43.023 01:32:05 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:31:43.023 01:32:05 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:31:43.023 01:32:05 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:43.023 01:32:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:43.023 [2024-07-25 01:32:05.466955] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:43.023 null0 00:31:43.023 [2024-07-25 01:32:05.499008] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:43.023 [2024-07-25 01:32:05.499275] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:43.023 [2024-07-25 01:32:05.507027] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:31:43.023 01:32:05 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:43.023 01:32:05 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t 
tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:31:43.023 01:32:05 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:31:43.023 01:32:05 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:31:43.023 01:32:05 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:43.023 01:32:05 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:43.023 01:32:05 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:43.023 01:32:05 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:43.023 01:32:05 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:31:43.023 01:32:05 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:43.023 01:32:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:43.283 [2024-07-25 01:32:05.515037] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:31:43.283 request: 00:31:43.283 { 00:31:43.283 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:31:43.283 "secure_channel": false, 00:31:43.283 "listen_address": { 00:31:43.283 "trtype": "tcp", 00:31:43.283 "traddr": "127.0.0.1", 00:31:43.283 "trsvcid": "4420" 00:31:43.283 }, 00:31:43.283 "method": "nvmf_subsystem_add_listener", 00:31:43.283 "req_id": 1 00:31:43.283 } 00:31:43.283 Got JSON-RPC error response 00:31:43.283 response: 00:31:43.283 { 00:31:43.283 "code": -32602, 00:31:43.283 "message": "Invalid parameters" 00:31:43.283 } 00:31:43.283 01:32:05 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:43.283 01:32:05 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:31:43.283 01:32:05 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:43.283 01:32:05 keyring_file -- common/autotest_common.sh@670 -- 
# [[ -n '' ]] 00:31:43.283 01:32:05 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:43.283 01:32:05 keyring_file -- keyring/file.sh@46 -- # bperfpid=1098758 00:31:43.283 01:32:05 keyring_file -- keyring/file.sh@48 -- # waitforlisten 1098758 /var/tmp/bperf.sock 00:31:43.283 01:32:05 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:31:43.283 01:32:05 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1098758 ']' 00:31:43.283 01:32:05 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:43.283 01:32:05 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:43.283 01:32:05 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:43.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:43.283 01:32:05 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:43.283 01:32:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:43.283 [2024-07-25 01:32:05.552215] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:31:43.283 [2024-07-25 01:32:05.552257] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1098758 ] 00:31:43.283 EAL: No free 2048 kB hugepages reported on node 1 00:31:43.283 [2024-07-25 01:32:05.603795] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:43.283 [2024-07-25 01:32:05.676353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:44.219 01:32:06 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:44.219 01:32:06 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:31:44.219 01:32:06 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.yvf06GaAef 00:31:44.219 01:32:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.yvf06GaAef 00:31:44.219 01:32:06 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.N6CgtzjfM8 00:31:44.219 01:32:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.N6CgtzjfM8 00:31:44.219 01:32:06 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:31:44.219 01:32:06 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:31:44.219 01:32:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:44.219 01:32:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:44.219 01:32:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:44.479 01:32:06 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.yvf06GaAef == 
\/\t\m\p\/\t\m\p\.\y\v\f\0\6\G\a\A\e\f ]] 00:31:44.479 01:32:06 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:31:44.479 01:32:06 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:31:44.479 01:32:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:44.479 01:32:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:44.479 01:32:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:44.737 01:32:07 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.N6CgtzjfM8 == \/\t\m\p\/\t\m\p\.\N\6\C\g\t\z\j\f\M\8 ]] 00:31:44.737 01:32:07 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:31:44.737 01:32:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:44.737 01:32:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:44.737 01:32:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:44.737 01:32:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:44.737 01:32:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:44.737 01:32:07 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:31:44.737 01:32:07 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:31:44.737 01:32:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:44.737 01:32:07 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:44.737 01:32:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:44.737 01:32:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:44.737 01:32:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:44.995 01:32:07 keyring_file -- keyring/file.sh@54 -- # 
(( 1 == 1 )) 00:31:44.995 01:32:07 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:44.996 01:32:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:45.254 [2024-07-25 01:32:07.550138] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:45.254 nvme0n1 00:31:45.254 01:32:07 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:31:45.254 01:32:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:45.254 01:32:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:45.254 01:32:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:45.254 01:32:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:45.254 01:32:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:45.511 01:32:07 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:31:45.511 01:32:07 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:31:45.511 01:32:07 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:45.511 01:32:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:45.511 01:32:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:45.511 01:32:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:45.511 01:32:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:45.769 01:32:08 keyring_file -- 
keyring/file.sh@60 -- # (( 1 == 1 )) 00:31:45.769 01:32:08 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:45.769 Running I/O for 1 seconds... 00:31:46.700 00:31:46.700 Latency(us) 00:31:46.700 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:46.700 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:31:46.700 nvme0n1 : 1.03 3511.55 13.72 0.00 0.00 36087.53 12936.24 57443.73 00:31:46.700 =================================================================================================================== 00:31:46.700 Total : 3511.55 13.72 0.00 0.00 36087.53 12936.24 57443.73 00:31:46.700 0 00:31:46.700 01:32:09 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:31:46.700 01:32:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:31:46.959 01:32:09 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:31:46.959 01:32:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:46.959 01:32:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:46.959 01:32:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:46.959 01:32:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:46.959 01:32:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:47.217 01:32:09 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:31:47.217 01:32:09 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:31:47.217 01:32:09 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:47.217 01:32:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:47.217 01:32:09 
keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:47.217 01:32:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:47.217 01:32:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:47.217 01:32:09 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:31:47.217 01:32:09 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:47.217 01:32:09 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:31:47.217 01:32:09 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:47.217 01:32:09 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:31:47.217 01:32:09 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:47.217 01:32:09 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:31:47.217 01:32:09 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:47.217 01:32:09 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:47.217 01:32:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:47.490 [2024-07-25 01:32:09.836290] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 
428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:31:47.490 [2024-07-25 01:32:09.836860] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c780 (107): Transport endpoint is not connected 00:31:47.490 [2024-07-25 01:32:09.837850] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c780 (9): Bad file descriptor 00:31:47.490 [2024-07-25 01:32:09.838850] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:47.490 [2024-07-25 01:32:09.838860] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:31:47.490 [2024-07-25 01:32:09.838867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:47.490 request: 00:31:47.490 { 00:31:47.490 "name": "nvme0", 00:31:47.490 "trtype": "tcp", 00:31:47.490 "traddr": "127.0.0.1", 00:31:47.490 "adrfam": "ipv4", 00:31:47.490 "trsvcid": "4420", 00:31:47.490 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:47.490 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:47.490 "prchk_reftag": false, 00:31:47.490 "prchk_guard": false, 00:31:47.490 "hdgst": false, 00:31:47.490 "ddgst": false, 00:31:47.490 "psk": "key1", 00:31:47.490 "method": "bdev_nvme_attach_controller", 00:31:47.490 "req_id": 1 00:31:47.490 } 00:31:47.490 Got JSON-RPC error response 00:31:47.490 response: 00:31:47.490 { 00:31:47.490 "code": -5, 00:31:47.490 "message": "Input/output error" 00:31:47.490 } 00:31:47.491 01:32:09 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:31:47.491 01:32:09 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:47.491 01:32:09 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:47.491 01:32:09 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:47.491 01:32:09 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:31:47.491 
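The failed attach above is driven through `scripts/rpc.py`, which sends a JSON-RPC request over the UNIX socket (`/var/tmp/bperf.sock` here) with exactly the parameters echoed back in the error response. A small sketch of how such a request could be framed, assuming standard JSON-RPC 2.0 framing on SPDK's side:

```python
import json

def build_rpc_request(method: str, params: dict, req_id: int = 1) -> str:
    # Hypothetical framing of the request that rpc.py writes to the
    # application's UNIX-domain RPC socket.
    return json.dumps(
        {"jsonrpc": "2.0", "method": method, "id": req_id, "params": params}
    )

# Mirrors the request whose parameters appear in the error response above:
# attaching with psk "key1" against a listener set up for key0 fails with
# code -5 (Input/output error).
req = build_rpc_request("bdev_nvme_attach_controller", {
    "name": "nvme0",
    "trtype": "tcp",
    "traddr": "127.0.0.1",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode0",
    "hostnqn": "nqn.2016-06.io.spdk:host0",
    "psk": "key1",
})
print(req)
```

The test harness's `NOT` wrapper only checks that the call returns a nonzero exit status, which is why the -5 error above still counts as a pass.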
01:32:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:47.491 01:32:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:47.491 01:32:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:47.491 01:32:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:47.491 01:32:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:47.761 01:32:10 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:31:47.761 01:32:10 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:31:47.761 01:32:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:47.761 01:32:10 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:47.761 01:32:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:47.761 01:32:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:47.761 01:32:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:47.761 01:32:10 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:31:47.761 01:32:10 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:31:47.761 01:32:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:31:48.020 01:32:10 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:31:48.020 01:32:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:31:48.279 01:32:10 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:31:48.279 01:32:10 keyring_file -- keyring/file.sh@77 -- # jq length 00:31:48.279 01:32:10 keyring_file 
-- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:48.279 01:32:10 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:31:48.279 01:32:10 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.yvf06GaAef 00:31:48.279 01:32:10 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.yvf06GaAef 00:31:48.279 01:32:10 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:31:48.279 01:32:10 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.yvf06GaAef 00:31:48.279 01:32:10 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:31:48.279 01:32:10 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:48.279 01:32:10 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:31:48.279 01:32:10 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:48.279 01:32:10 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.yvf06GaAef 00:31:48.279 01:32:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.yvf06GaAef 00:31:48.539 [2024-07-25 01:32:10.895763] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.yvf06GaAef': 0100660 00:31:48.539 [2024-07-25 01:32:10.895791] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:31:48.539 request: 00:31:48.539 { 00:31:48.539 "name": "key0", 00:31:48.539 "path": "/tmp/tmp.yvf06GaAef", 00:31:48.539 "method": "keyring_file_add_key", 00:31:48.539 "req_id": 1 00:31:48.539 } 00:31:48.539 Got JSON-RPC error response 00:31:48.539 response: 00:31:48.539 { 00:31:48.539 "code": -1, 00:31:48.539 "message": "Operation not permitted" 
00:31:48.539 } 00:31:48.539 01:32:10 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:31:48.539 01:32:10 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:48.539 01:32:10 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:48.539 01:32:10 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:48.539 01:32:10 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.yvf06GaAef 00:31:48.539 01:32:10 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.yvf06GaAef 00:31:48.539 01:32:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.yvf06GaAef 00:31:48.799 01:32:11 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.yvf06GaAef 00:31:48.799 01:32:11 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:31:48.799 01:32:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:48.799 01:32:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:48.799 01:32:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:48.799 01:32:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:48.799 01:32:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:48.799 01:32:11 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:31:48.799 01:32:11 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:48.799 01:32:11 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:31:48.799 01:32:11 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n 
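The 0660/0600 dance above exercises the keyring's permission check: `keyring_file_check_path` rejects the key file while it is group-accessible ("Invalid permissions for key file ... 0100660") and accepts it again after `chmod 0600`. A sketch of an equivalent check, assuming the rule is simply "no group/other permission bits":

```python
import os
import stat
import tempfile

def check_key_permissions(path: str) -> bool:
    # Hypothetical equivalent of the keyring_file_check_path behavior seen
    # in the log: refuse any key file readable or writable by group/others.
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return (mode & 0o077) == 0

fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o660)
print(check_key_permissions(path))  # prints False: group bits set, rejected
os.chmod(path, 0o600)
print(check_key_permissions(path))  # prints True: owner-only, accepted
os.remove(path)
```

This matches the trace: `keyring_file_add_key` fails with -1 (Operation not permitted) on the 0660 file, then succeeds once the mode is restored.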
nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:48.799 01:32:11 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:31:48.799 01:32:11 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:48.799 01:32:11 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:31:48.799 01:32:11 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:48.799 01:32:11 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:48.799 01:32:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:49.057 [2024-07-25 01:32:11.433216] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.yvf06GaAef': No such file or directory 00:31:49.057 [2024-07-25 01:32:11.433238] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:31:49.057 [2024-07-25 01:32:11.433259] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:31:49.057 [2024-07-25 01:32:11.433265] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:49.057 [2024-07-25 01:32:11.433271] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:31:49.057 request: 00:31:49.057 { 00:31:49.057 "name": "nvme0", 00:31:49.057 "trtype": "tcp", 00:31:49.057 "traddr": "127.0.0.1", 00:31:49.057 "adrfam": "ipv4", 00:31:49.057 "trsvcid": "4420", 00:31:49.057 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:49.057 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:49.057 
"prchk_reftag": false, 00:31:49.057 "prchk_guard": false, 00:31:49.057 "hdgst": false, 00:31:49.057 "ddgst": false, 00:31:49.057 "psk": "key0", 00:31:49.057 "method": "bdev_nvme_attach_controller", 00:31:49.057 "req_id": 1 00:31:49.057 } 00:31:49.057 Got JSON-RPC error response 00:31:49.057 response: 00:31:49.057 { 00:31:49.057 "code": -19, 00:31:49.057 "message": "No such device" 00:31:49.057 } 00:31:49.057 01:32:11 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:31:49.057 01:32:11 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:49.057 01:32:11 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:49.057 01:32:11 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:49.057 01:32:11 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:31:49.057 01:32:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:31:49.315 01:32:11 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:31:49.315 01:32:11 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:31:49.315 01:32:11 keyring_file -- keyring/common.sh@17 -- # name=key0 00:31:49.315 01:32:11 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:31:49.316 01:32:11 keyring_file -- keyring/common.sh@17 -- # digest=0 00:31:49.316 01:32:11 keyring_file -- keyring/common.sh@18 -- # mktemp 00:31:49.316 01:32:11 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.oOL2LlnpJe 00:31:49.316 01:32:11 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:31:49.316 01:32:11 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:31:49.316 01:32:11 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:31:49.316 01:32:11 keyring_file -- 
nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:49.316 01:32:11 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:31:49.316 01:32:11 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:31:49.316 01:32:11 keyring_file -- nvmf/common.sh@705 -- # python - 00:31:49.316 01:32:11 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.oOL2LlnpJe 00:31:49.316 01:32:11 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.oOL2LlnpJe 00:31:49.316 01:32:11 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.oOL2LlnpJe 00:31:49.316 01:32:11 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.oOL2LlnpJe 00:31:49.316 01:32:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.oOL2LlnpJe 00:31:49.574 01:32:11 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:49.574 01:32:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:49.574 nvme0n1 00:31:49.574 01:32:12 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:31:49.574 01:32:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:49.574 01:32:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:49.574 01:32:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:49.574 01:32:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:49.574 01:32:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:31:49.832 01:32:12 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:31:49.832 01:32:12 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:31:49.832 01:32:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:31:50.089 01:32:12 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:31:50.089 01:32:12 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:31:50.089 01:32:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:50.089 01:32:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:50.089 01:32:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:50.347 01:32:12 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:31:50.347 01:32:12 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:31:50.347 01:32:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:50.347 01:32:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:50.347 01:32:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:50.347 01:32:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:50.347 01:32:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:50.347 01:32:12 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:31:50.347 01:32:12 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:31:50.347 01:32:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:31:50.605 01:32:12 keyring_file -- keyring/file.sh@104 -- # bperf_cmd 
keyring_get_keys 00:31:50.605 01:32:12 keyring_file -- keyring/file.sh@104 -- # jq length 00:31:50.605 01:32:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:50.863 01:32:13 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:31:50.863 01:32:13 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.oOL2LlnpJe 00:31:50.863 01:32:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.oOL2LlnpJe 00:31:50.863 01:32:13 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.N6CgtzjfM8 00:31:50.863 01:32:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.N6CgtzjfM8 00:31:51.121 01:32:13 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:51.121 01:32:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:51.380 nvme0n1 00:31:51.380 01:32:13 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:31:51.380 01:32:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:31:51.639 01:32:13 keyring_file -- keyring/file.sh@112 -- # config='{ 00:31:51.639 "subsystems": [ 00:31:51.639 { 00:31:51.639 "subsystem": "keyring", 00:31:51.639 "config": [ 00:31:51.639 { 00:31:51.639 "method": "keyring_file_add_key", 00:31:51.639 
"params": { 00:31:51.639 "name": "key0", 00:31:51.639 "path": "/tmp/tmp.oOL2LlnpJe" 00:31:51.639 } 00:31:51.639 }, 00:31:51.639 { 00:31:51.639 "method": "keyring_file_add_key", 00:31:51.639 "params": { 00:31:51.639 "name": "key1", 00:31:51.639 "path": "/tmp/tmp.N6CgtzjfM8" 00:31:51.639 } 00:31:51.639 } 00:31:51.639 ] 00:31:51.639 }, 00:31:51.639 { 00:31:51.639 "subsystem": "iobuf", 00:31:51.639 "config": [ 00:31:51.639 { 00:31:51.639 "method": "iobuf_set_options", 00:31:51.639 "params": { 00:31:51.639 "small_pool_count": 8192, 00:31:51.639 "large_pool_count": 1024, 00:31:51.639 "small_bufsize": 8192, 00:31:51.639 "large_bufsize": 135168 00:31:51.639 } 00:31:51.639 } 00:31:51.639 ] 00:31:51.639 }, 00:31:51.639 { 00:31:51.639 "subsystem": "sock", 00:31:51.639 "config": [ 00:31:51.639 { 00:31:51.639 "method": "sock_set_default_impl", 00:31:51.639 "params": { 00:31:51.639 "impl_name": "posix" 00:31:51.639 } 00:31:51.639 }, 00:31:51.639 { 00:31:51.639 "method": "sock_impl_set_options", 00:31:51.639 "params": { 00:31:51.639 "impl_name": "ssl", 00:31:51.639 "recv_buf_size": 4096, 00:31:51.639 "send_buf_size": 4096, 00:31:51.640 "enable_recv_pipe": true, 00:31:51.640 "enable_quickack": false, 00:31:51.640 "enable_placement_id": 0, 00:31:51.640 "enable_zerocopy_send_server": true, 00:31:51.640 "enable_zerocopy_send_client": false, 00:31:51.640 "zerocopy_threshold": 0, 00:31:51.640 "tls_version": 0, 00:31:51.640 "enable_ktls": false 00:31:51.640 } 00:31:51.640 }, 00:31:51.640 { 00:31:51.640 "method": "sock_impl_set_options", 00:31:51.640 "params": { 00:31:51.640 "impl_name": "posix", 00:31:51.640 "recv_buf_size": 2097152, 00:31:51.640 "send_buf_size": 2097152, 00:31:51.640 "enable_recv_pipe": true, 00:31:51.640 "enable_quickack": false, 00:31:51.640 "enable_placement_id": 0, 00:31:51.640 "enable_zerocopy_send_server": true, 00:31:51.640 "enable_zerocopy_send_client": false, 00:31:51.640 "zerocopy_threshold": 0, 00:31:51.640 "tls_version": 0, 00:31:51.640 "enable_ktls": false 
00:31:51.640 } 00:31:51.640 } 00:31:51.640 ] 00:31:51.640 }, 00:31:51.640 { 00:31:51.640 "subsystem": "vmd", 00:31:51.640 "config": [] 00:31:51.640 }, 00:31:51.640 { 00:31:51.640 "subsystem": "accel", 00:31:51.640 "config": [ 00:31:51.640 { 00:31:51.640 "method": "accel_set_options", 00:31:51.640 "params": { 00:31:51.640 "small_cache_size": 128, 00:31:51.640 "large_cache_size": 16, 00:31:51.640 "task_count": 2048, 00:31:51.640 "sequence_count": 2048, 00:31:51.640 "buf_count": 2048 00:31:51.640 } 00:31:51.640 } 00:31:51.640 ] 00:31:51.640 }, 00:31:51.640 { 00:31:51.640 "subsystem": "bdev", 00:31:51.640 "config": [ 00:31:51.640 { 00:31:51.640 "method": "bdev_set_options", 00:31:51.640 "params": { 00:31:51.640 "bdev_io_pool_size": 65535, 00:31:51.640 "bdev_io_cache_size": 256, 00:31:51.640 "bdev_auto_examine": true, 00:31:51.640 "iobuf_small_cache_size": 128, 00:31:51.640 "iobuf_large_cache_size": 16 00:31:51.640 } 00:31:51.640 }, 00:31:51.640 { 00:31:51.640 "method": "bdev_raid_set_options", 00:31:51.640 "params": { 00:31:51.640 "process_window_size_kb": 1024 00:31:51.640 } 00:31:51.640 }, 00:31:51.640 { 00:31:51.640 "method": "bdev_iscsi_set_options", 00:31:51.640 "params": { 00:31:51.640 "timeout_sec": 30 00:31:51.640 } 00:31:51.640 }, 00:31:51.640 { 00:31:51.640 "method": "bdev_nvme_set_options", 00:31:51.640 "params": { 00:31:51.640 "action_on_timeout": "none", 00:31:51.640 "timeout_us": 0, 00:31:51.640 "timeout_admin_us": 0, 00:31:51.640 "keep_alive_timeout_ms": 10000, 00:31:51.640 "arbitration_burst": 0, 00:31:51.640 "low_priority_weight": 0, 00:31:51.640 "medium_priority_weight": 0, 00:31:51.640 "high_priority_weight": 0, 00:31:51.640 "nvme_adminq_poll_period_us": 10000, 00:31:51.640 "nvme_ioq_poll_period_us": 0, 00:31:51.640 "io_queue_requests": 512, 00:31:51.640 "delay_cmd_submit": true, 00:31:51.640 "transport_retry_count": 4, 00:31:51.640 "bdev_retry_count": 3, 00:31:51.640 "transport_ack_timeout": 0, 00:31:51.640 "ctrlr_loss_timeout_sec": 0, 00:31:51.640 
"reconnect_delay_sec": 0, 00:31:51.640 "fast_io_fail_timeout_sec": 0, 00:31:51.640 "disable_auto_failback": false, 00:31:51.640 "generate_uuids": false, 00:31:51.640 "transport_tos": 0, 00:31:51.640 "nvme_error_stat": false, 00:31:51.640 "rdma_srq_size": 0, 00:31:51.640 "io_path_stat": false, 00:31:51.640 "allow_accel_sequence": false, 00:31:51.640 "rdma_max_cq_size": 0, 00:31:51.640 "rdma_cm_event_timeout_ms": 0, 00:31:51.640 "dhchap_digests": [ 00:31:51.640 "sha256", 00:31:51.640 "sha384", 00:31:51.640 "sha512" 00:31:51.640 ], 00:31:51.640 "dhchap_dhgroups": [ 00:31:51.640 "null", 00:31:51.640 "ffdhe2048", 00:31:51.640 "ffdhe3072", 00:31:51.641 "ffdhe4096", 00:31:51.641 "ffdhe6144", 00:31:51.641 "ffdhe8192" 00:31:51.641 ] 00:31:51.641 } 00:31:51.641 }, 00:31:51.641 { 00:31:51.641 "method": "bdev_nvme_attach_controller", 00:31:51.641 "params": { 00:31:51.641 "name": "nvme0", 00:31:51.641 "trtype": "TCP", 00:31:51.641 "adrfam": "IPv4", 00:31:51.641 "traddr": "127.0.0.1", 00:31:51.641 "trsvcid": "4420", 00:31:51.641 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:51.641 "prchk_reftag": false, 00:31:51.641 "prchk_guard": false, 00:31:51.641 "ctrlr_loss_timeout_sec": 0, 00:31:51.641 "reconnect_delay_sec": 0, 00:31:51.641 "fast_io_fail_timeout_sec": 0, 00:31:51.641 "psk": "key0", 00:31:51.641 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:51.641 "hdgst": false, 00:31:51.641 "ddgst": false 00:31:51.641 } 00:31:51.641 }, 00:31:51.641 { 00:31:51.641 "method": "bdev_nvme_set_hotplug", 00:31:51.641 "params": { 00:31:51.641 "period_us": 100000, 00:31:51.641 "enable": false 00:31:51.641 } 00:31:51.641 }, 00:31:51.641 { 00:31:51.641 "method": "bdev_wait_for_examine" 00:31:51.641 } 00:31:51.641 ] 00:31:51.641 }, 00:31:51.641 { 00:31:51.641 "subsystem": "nbd", 00:31:51.641 "config": [] 00:31:51.641 } 00:31:51.641 ] 00:31:51.641 }' 00:31:51.641 01:32:13 keyring_file -- keyring/file.sh@114 -- # killprocess 1098758 00:31:51.641 01:32:13 keyring_file -- common/autotest_common.sh@948 -- 
# '[' -z 1098758 ']' 00:31:51.641 01:32:13 keyring_file -- common/autotest_common.sh@952 -- # kill -0 1098758 00:31:51.641 01:32:13 keyring_file -- common/autotest_common.sh@953 -- # uname 00:31:51.641 01:32:13 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:51.641 01:32:13 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1098758 00:31:51.641 01:32:13 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:51.641 01:32:13 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:51.641 01:32:13 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1098758' 00:31:51.641 killing process with pid 1098758 00:31:51.641 01:32:13 keyring_file -- common/autotest_common.sh@967 -- # kill 1098758 00:31:51.641 Received shutdown signal, test time was about 1.000000 seconds 00:31:51.641 00:31:51.641 Latency(us) 00:31:51.642 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:51.642 =================================================================================================================== 00:31:51.642 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:51.642 01:32:13 keyring_file -- common/autotest_common.sh@972 -- # wait 1098758 00:31:51.901 01:32:14 keyring_file -- keyring/file.sh@117 -- # bperfpid=1100275 00:31:51.901 01:32:14 keyring_file -- keyring/file.sh@119 -- # waitforlisten 1100275 /var/tmp/bperf.sock 00:31:51.901 01:32:14 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1100275 ']' 00:31:51.901 01:32:14 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:51.901 01:32:14 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:51.901 01:32:14 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:31:51.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:51.901 01:32:14 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:51.901 01:32:14 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:31:51.901 01:32:14 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:51.901 01:32:14 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:31:51.901 "subsystems": [ 00:31:51.901 { 00:31:51.901 "subsystem": "keyring", 00:31:51.901 "config": [ 00:31:51.901 { 00:31:51.901 "method": "keyring_file_add_key", 00:31:51.901 "params": { 00:31:51.901 "name": "key0", 00:31:51.901 "path": "/tmp/tmp.oOL2LlnpJe" 00:31:51.901 } 00:31:51.901 }, 00:31:51.901 { 00:31:51.901 "method": "keyring_file_add_key", 00:31:51.901 "params": { 00:31:51.901 "name": "key1", 00:31:51.901 "path": "/tmp/tmp.N6CgtzjfM8" 00:31:51.901 } 00:31:51.901 } 00:31:51.901 ] 00:31:51.901 }, 00:31:51.901 { 00:31:51.901 "subsystem": "iobuf", 00:31:51.901 "config": [ 00:31:51.901 { 00:31:51.901 "method": "iobuf_set_options", 00:31:51.901 "params": { 00:31:51.901 "small_pool_count": 8192, 00:31:51.901 "large_pool_count": 1024, 00:31:51.901 "small_bufsize": 8192, 00:31:51.901 "large_bufsize": 135168 00:31:51.901 } 00:31:51.901 } 00:31:51.901 ] 00:31:51.901 }, 00:31:51.901 { 00:31:51.901 "subsystem": "sock", 00:31:51.901 "config": [ 00:31:51.901 { 00:31:51.901 "method": "sock_set_default_impl", 00:31:51.901 "params": { 00:31:51.901 "impl_name": "posix" 00:31:51.901 } 00:31:51.901 }, 00:31:51.901 { 00:31:51.901 "method": "sock_impl_set_options", 00:31:51.902 "params": { 00:31:51.902 "impl_name": "ssl", 00:31:51.902 "recv_buf_size": 4096, 00:31:51.902 "send_buf_size": 4096, 00:31:51.902 "enable_recv_pipe": true, 00:31:51.902 "enable_quickack": false, 00:31:51.902 "enable_placement_id": 0, 00:31:51.902 
"enable_zerocopy_send_server": true, 00:31:51.902 "enable_zerocopy_send_client": false, 00:31:51.902 "zerocopy_threshold": 0, 00:31:51.902 "tls_version": 0, 00:31:51.902 "enable_ktls": false 00:31:51.902 } 00:31:51.902 }, 00:31:51.902 { 00:31:51.902 "method": "sock_impl_set_options", 00:31:51.902 "params": { 00:31:51.902 "impl_name": "posix", 00:31:51.902 "recv_buf_size": 2097152, 00:31:51.902 "send_buf_size": 2097152, 00:31:51.902 "enable_recv_pipe": true, 00:31:51.902 "enable_quickack": false, 00:31:51.902 "enable_placement_id": 0, 00:31:51.902 "enable_zerocopy_send_server": true, 00:31:51.902 "enable_zerocopy_send_client": false, 00:31:51.902 "zerocopy_threshold": 0, 00:31:51.902 "tls_version": 0, 00:31:51.902 "enable_ktls": false 00:31:51.902 } 00:31:51.902 } 00:31:51.902 ] 00:31:51.902 }, 00:31:51.902 { 00:31:51.902 "subsystem": "vmd", 00:31:51.902 "config": [] 00:31:51.902 }, 00:31:51.902 { 00:31:51.902 "subsystem": "accel", 00:31:51.902 "config": [ 00:31:51.902 { 00:31:51.902 "method": "accel_set_options", 00:31:51.902 "params": { 00:31:51.902 "small_cache_size": 128, 00:31:51.902 "large_cache_size": 16, 00:31:51.902 "task_count": 2048, 00:31:51.902 "sequence_count": 2048, 00:31:51.902 "buf_count": 2048 00:31:51.902 } 00:31:51.902 } 00:31:51.902 ] 00:31:51.902 }, 00:31:51.902 { 00:31:51.902 "subsystem": "bdev", 00:31:51.902 "config": [ 00:31:51.902 { 00:31:51.902 "method": "bdev_set_options", 00:31:51.902 "params": { 00:31:51.902 "bdev_io_pool_size": 65535, 00:31:51.902 "bdev_io_cache_size": 256, 00:31:51.902 "bdev_auto_examine": true, 00:31:51.902 "iobuf_small_cache_size": 128, 00:31:51.902 "iobuf_large_cache_size": 16 00:31:51.902 } 00:31:51.902 }, 00:31:51.902 { 00:31:51.902 "method": "bdev_raid_set_options", 00:31:51.902 "params": { 00:31:51.902 "process_window_size_kb": 1024 00:31:51.902 } 00:31:51.902 }, 00:31:51.902 { 00:31:51.902 "method": "bdev_iscsi_set_options", 00:31:51.902 "params": { 00:31:51.902 "timeout_sec": 30 00:31:51.902 } 00:31:51.902 }, 
00:31:51.902 { 00:31:51.902 "method": "bdev_nvme_set_options", 00:31:51.902 "params": { 00:31:51.902 "action_on_timeout": "none", 00:31:51.902 "timeout_us": 0, 00:31:51.902 "timeout_admin_us": 0, 00:31:51.902 "keep_alive_timeout_ms": 10000, 00:31:51.902 "arbitration_burst": 0, 00:31:51.902 "low_priority_weight": 0, 00:31:51.902 "medium_priority_weight": 0, 00:31:51.902 "high_priority_weight": 0, 00:31:51.902 "nvme_adminq_poll_period_us": 10000, 00:31:51.902 "nvme_ioq_poll_period_us": 0, 00:31:51.902 "io_queue_requests": 512, 00:31:51.902 "delay_cmd_submit": true, 00:31:51.902 "transport_retry_count": 4, 00:31:51.902 "bdev_retry_count": 3, 00:31:51.902 "transport_ack_timeout": 0, 00:31:51.902 "ctrlr_loss_timeout_sec": 0, 00:31:51.902 "reconnect_delay_sec": 0, 00:31:51.902 "fast_io_fail_timeout_sec": 0, 00:31:51.902 "disable_auto_failback": false, 00:31:51.902 "generate_uuids": false, 00:31:51.902 "transport_tos": 0, 00:31:51.902 "nvme_error_stat": false, 00:31:51.902 "rdma_srq_size": 0, 00:31:51.902 "io_path_stat": false, 00:31:51.902 "allow_accel_sequence": false, 00:31:51.902 "rdma_max_cq_size": 0, 00:31:51.902 "rdma_cm_event_timeout_ms": 0, 00:31:51.902 "dhchap_digests": [ 00:31:51.902 "sha256", 00:31:51.902 "sha384", 00:31:51.902 "sha512" 00:31:51.902 ], 00:31:51.902 "dhchap_dhgroups": [ 00:31:51.902 "null", 00:31:51.902 "ffdhe2048", 00:31:51.902 "ffdhe3072", 00:31:51.902 "ffdhe4096", 00:31:51.902 "ffdhe6144", 00:31:51.902 "ffdhe8192" 00:31:51.902 ] 00:31:51.902 } 00:31:51.902 }, 00:31:51.902 { 00:31:51.902 "method": "bdev_nvme_attach_controller", 00:31:51.902 "params": { 00:31:51.902 "name": "nvme0", 00:31:51.902 "trtype": "TCP", 00:31:51.902 "adrfam": "IPv4", 00:31:51.902 "traddr": "127.0.0.1", 00:31:51.902 "trsvcid": "4420", 00:31:51.902 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:51.902 "prchk_reftag": false, 00:31:51.902 "prchk_guard": false, 00:31:51.902 "ctrlr_loss_timeout_sec": 0, 00:31:51.902 "reconnect_delay_sec": 0, 00:31:51.902 
"fast_io_fail_timeout_sec": 0, 00:31:51.902 "psk": "key0", 00:31:51.902 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:51.902 "hdgst": false, 00:31:51.902 "ddgst": false 00:31:51.902 } 00:31:51.902 }, 00:31:51.902 { 00:31:51.902 "method": "bdev_nvme_set_hotplug", 00:31:51.902 "params": { 00:31:51.902 "period_us": 100000, 00:31:51.902 "enable": false 00:31:51.902 } 00:31:51.902 }, 00:31:51.902 { 00:31:51.902 "method": "bdev_wait_for_examine" 00:31:51.902 } 00:31:51.902 ] 00:31:51.902 }, 00:31:51.902 { 00:31:51.902 "subsystem": "nbd", 00:31:51.902 "config": [] 00:31:51.902 } 00:31:51.902 ] 00:31:51.902 }' 00:31:51.902 [2024-07-25 01:32:14.204137] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:31:51.902 [2024-07-25 01:32:14.204187] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1100275 ] 00:31:51.902 EAL: No free 2048 kB hugepages reported on node 1 00:31:51.902 [2024-07-25 01:32:14.255288] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:51.902 [2024-07-25 01:32:14.326948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:52.161 [2024-07-25 01:32:14.486279] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:52.729 01:32:15 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:52.729 01:32:15 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:31:52.729 01:32:15 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:31:52.729 01:32:15 keyring_file -- keyring/file.sh@120 -- # jq length 00:31:52.729 01:32:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:52.729 01:32:15 keyring_file -- 
keyring/file.sh@120 -- # (( 2 == 2 )) 00:31:52.729 01:32:15 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:31:52.729 01:32:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:52.729 01:32:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:52.729 01:32:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:52.729 01:32:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:52.729 01:32:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:52.988 01:32:15 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:31:52.988 01:32:15 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:31:52.988 01:32:15 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:52.988 01:32:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:52.988 01:32:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:52.988 01:32:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:52.988 01:32:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:53.246 01:32:15 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:31:53.246 01:32:15 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:31:53.246 01:32:15 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:31:53.246 01:32:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:31:53.246 01:32:15 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:31:53.246 01:32:15 keyring_file -- keyring/file.sh@1 -- # cleanup 00:31:53.246 01:32:15 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.oOL2LlnpJe /tmp/tmp.N6CgtzjfM8 
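[Annotation] The `get_key` / `get_refcnt` helpers traced above filter the `keyring_get_keys` RPC output with jq (`'.[] | select(.name == "key0")'`, then `jq -r .refcnt`). The same selection, sketched in Python over an assumed sample of the RPC output (field names `.name`, `.refcnt`, `.removed` follow the jq filters in the trace; the sample values are illustrative, not copied from a real run):

```python
import json

# Assumed sample of `keyring_get_keys` output for illustration only.
sample = json.loads("""
[
  {"name": "key0", "path": "/tmp/tmp.oOL2LlnpJe", "refcnt": 2, "removed": false},
  {"name": "key1", "path": "/tmp/tmp.N6CgtzjfM8", "refcnt": 1, "removed": false}
]
""")

def get_key(keys, name):
    # jq equivalent: .[] | select(.name == $name)
    return next(k for k in keys if k["name"] == name)

refcnt = get_key(sample, "key0")["refcnt"]
```

The shell tests then compare the extracted value against the expected count, e.g. `(( 2 == 2 ))` after the controller attaches.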
00:31:53.246 01:32:15 keyring_file -- keyring/file.sh@20 -- # killprocess 1100275 00:31:53.246 01:32:15 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1100275 ']' 00:31:53.246 01:32:15 keyring_file -- common/autotest_common.sh@952 -- # kill -0 1100275 00:31:53.246 01:32:15 keyring_file -- common/autotest_common.sh@953 -- # uname 00:31:53.246 01:32:15 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:53.246 01:32:15 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1100275 00:31:53.505 01:32:15 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:53.505 01:32:15 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:53.505 01:32:15 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1100275' 00:31:53.505 killing process with pid 1100275 00:31:53.505 01:32:15 keyring_file -- common/autotest_common.sh@967 -- # kill 1100275 00:31:53.505 Received shutdown signal, test time was about 1.000000 seconds 00:31:53.505 00:31:53.505 Latency(us) 00:31:53.505 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:53.505 =================================================================================================================== 00:31:53.505 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:31:53.505 01:32:15 keyring_file -- common/autotest_common.sh@972 -- # wait 1100275 00:31:53.505 01:32:15 keyring_file -- keyring/file.sh@21 -- # killprocess 1098558 00:31:53.505 01:32:15 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1098558 ']' 00:31:53.505 01:32:15 keyring_file -- common/autotest_common.sh@952 -- # kill -0 1098558 00:31:53.505 01:32:15 keyring_file -- common/autotest_common.sh@953 -- # uname 00:31:53.505 01:32:15 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:53.505 01:32:15 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o 
comm= 1098558 00:31:53.505 01:32:15 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:53.505 01:32:15 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:53.505 01:32:15 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1098558' 00:31:53.505 killing process with pid 1098558 00:31:53.505 01:32:15 keyring_file -- common/autotest_common.sh@967 -- # kill 1098558 00:31:53.505 [2024-07-25 01:32:15.984108] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:31:53.505 01:32:15 keyring_file -- common/autotest_common.sh@972 -- # wait 1098558 00:31:54.074 00:31:54.074 real 0m11.900s 00:31:54.074 user 0m27.703s 00:31:54.074 sys 0m2.686s 00:31:54.074 01:32:16 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:54.074 01:32:16 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:54.074 ************************************ 00:31:54.074 END TEST keyring_file 00:31:54.074 ************************************ 00:31:54.074 01:32:16 -- common/autotest_common.sh@1142 -- # return 0 00:31:54.074 01:32:16 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:31:54.074 01:32:16 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:31:54.074 01:32:16 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:54.074 01:32:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:54.074 01:32:16 -- common/autotest_common.sh@10 -- # set +x 00:31:54.074 ************************************ 00:31:54.074 START TEST keyring_linux 00:31:54.074 ************************************ 00:31:54.074 01:32:16 keyring_linux -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:31:54.074 * Looking for test storage... 
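[Annotation] The keyring_linux suite starting here sources nvmf/common.sh, which (as the trace below shows) derives the host NQN from `nvme gen-hostnqn`, yielding a uuid-based NQN. A rough sketch of that NQN shape (real nvme-cli may instead reuse a persisted hostnqn; this only mirrors the format visible in the trace):

```python
import re
import uuid

def gen_hostnqn() -> str:
    # Shape of `nvme gen-hostnqn` output seen in the trace:
    # nqn.2014-08.org.nvmexpress:uuid:<random-uuid>
    return f"nqn.2014-08.org.nvmexpress:uuid:{uuid.uuid4()}"

nqn = gen_hostnqn()
```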
00:31:54.074 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:31:54.074 01:32:16 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:31:54.074 01:32:16 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:54.074 01:32:16 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:31:54.074 01:32:16 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:54.074 01:32:16 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:54.074 01:32:16 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:54.074 01:32:16 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:54.074 01:32:16 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:54.074 01:32:16 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:54.074 01:32:16 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:54.074 01:32:16 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:54.074 01:32:16 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:54.074 01:32:16 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:54.074 01:32:16 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:54.074 01:32:16 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:54.074 01:32:16 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:54.074 01:32:16 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:54.074 01:32:16 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:54.074 01:32:16 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:54.074 01:32:16 keyring_linux -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:54.074 01:32:16 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:54.074 01:32:16 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:54.074 01:32:16 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:54.074 01:32:16 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.074 01:32:16 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.074 01:32:16 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.074 01:32:16 keyring_linux -- paths/export.sh@5 -- # export PATH 00:31:54.074 01:32:16 keyring_linux -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.074 01:32:16 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:31:54.074 01:32:16 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:54.074 01:32:16 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:54.074 01:32:16 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:54.074 01:32:16 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:54.074 01:32:16 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:54.074 01:32:16 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:54.074 01:32:16 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:54.074 01:32:16 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:54.074 01:32:16 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:31:54.074 01:32:16 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:31:54.074 01:32:16 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:31:54.074 01:32:16 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:31:54.074 01:32:16 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:31:54.074 01:32:16 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:31:54.074 01:32:16 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:31:54.074 01:32:16 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:31:54.074 01:32:16 keyring_linux -- 
keyring/common.sh@17 -- # name=key0 00:31:54.074 01:32:16 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:31:54.074 01:32:16 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:31:54.074 01:32:16 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:31:54.074 01:32:16 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:31:54.074 01:32:16 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:31:54.074 01:32:16 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:31:54.074 01:32:16 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:54.074 01:32:16 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:31:54.074 01:32:16 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:31:54.074 01:32:16 keyring_linux -- nvmf/common.sh@705 -- # python - 00:31:54.074 01:32:16 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:31:54.074 01:32:16 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:31:54.074 /tmp/:spdk-test:key0 00:31:54.074 01:32:16 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:31:54.074 01:32:16 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:31:54.074 01:32:16 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:31:54.074 01:32:16 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:31:54.074 01:32:16 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:31:54.074 01:32:16 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:31:54.074 01:32:16 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:31:54.074 01:32:16 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 
00:31:54.074 01:32:16 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:31:54.074 01:32:16 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:54.074 01:32:16 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:31:54.074 01:32:16 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:31:54.074 01:32:16 keyring_linux -- nvmf/common.sh@705 -- # python - 00:31:54.075 01:32:16 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:31:54.075 01:32:16 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:31:54.075 /tmp/:spdk-test:key1 00:31:54.075 01:32:16 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1100813 00:31:54.075 01:32:16 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:31:54.075 01:32:16 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1100813 00:31:54.075 01:32:16 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 1100813 ']' 00:31:54.075 01:32:16 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:54.075 01:32:16 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:54.075 01:32:16 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:54.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:54.334 01:32:16 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:54.334 01:32:16 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:31:54.334 [2024-07-25 01:32:16.606774] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:31:54.334 [2024-07-25 01:32:16.606823] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1100813 ] 00:31:54.334 EAL: No free 2048 kB hugepages reported on node 1 00:31:54.334 [2024-07-25 01:32:16.659486] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:54.334 [2024-07-25 01:32:16.739646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:55.270 01:32:17 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:55.270 01:32:17 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:31:55.270 01:32:17 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:31:55.270 01:32:17 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.270 01:32:17 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:31:55.270 [2024-07-25 01:32:17.413096] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:55.270 null0 00:31:55.270 [2024-07-25 01:32:17.445146] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:55.270 [2024-07-25 01:32:17.445450] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:55.270 01:32:17 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.270 01:32:17 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:31:55.270 936961227 00:31:55.270 01:32:17 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:31:55.270 595393286 00:31:55.270 01:32:17 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1100832 00:31:55.270 01:32:17 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1100832 
/var/tmp/bperf.sock 00:31:55.270 01:32:17 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:31:55.270 01:32:17 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 1100832 ']' 00:31:55.271 01:32:17 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:55.271 01:32:17 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:55.271 01:32:17 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:55.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:55.271 01:32:17 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:55.271 01:32:17 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:31:55.271 [2024-07-25 01:32:17.515246] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:31:55.271 [2024-07-25 01:32:17.515293] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1100832 ] 00:31:55.271 EAL: No free 2048 kB hugepages reported on node 1 00:31:55.271 [2024-07-25 01:32:17.569687] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:55.271 [2024-07-25 01:32:17.648845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:55.838 01:32:18 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:55.838 01:32:18 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:31:55.838 01:32:18 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:31:55.838 01:32:18 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:31:56.097 01:32:18 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:31:56.097 01:32:18 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:56.355 01:32:18 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:31:56.355 01:32:18 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:31:56.613 [2024-07-25 01:32:18.880267] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:56.613 
nvme0n1 00:31:56.613 01:32:18 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:31:56.613 01:32:18 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:31:56.613 01:32:18 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:31:56.613 01:32:18 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:31:56.613 01:32:18 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:31:56.613 01:32:18 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:56.872 01:32:19 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:31:56.872 01:32:19 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:31:56.872 01:32:19 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:31:56.872 01:32:19 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:31:56.872 01:32:19 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:56.872 01:32:19 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:31:56.872 01:32:19 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:56.872 01:32:19 keyring_linux -- keyring/linux.sh@25 -- # sn=936961227 00:31:56.872 01:32:19 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:31:56.872 01:32:19 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:31:56.872 01:32:19 keyring_linux -- keyring/linux.sh@26 -- # [[ 936961227 == \9\3\6\9\6\1\2\2\7 ]] 00:31:56.872 01:32:19 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 936961227 00:31:56.872 01:32:19 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == 
\N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:31:56.872 01:32:19 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:57.131 Running I/O for 1 seconds... 00:31:58.065 00:31:58.065 Latency(us) 00:31:58.065 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:58.065 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:31:58.065 nvme0n1 : 1.03 2837.96 11.09 0.00 0.00 44428.67 16298.52 62002.75 00:31:58.065 =================================================================================================================== 00:31:58.065 Total : 2837.96 11.09 0.00 0.00 44428.67 16298.52 62002.75 00:31:58.065 0 00:31:58.065 01:32:20 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:31:58.065 01:32:20 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:31:58.323 01:32:20 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:31:58.323 01:32:20 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:31:58.323 01:32:20 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:31:58.323 01:32:20 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:31:58.323 01:32:20 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:31:58.324 01:32:20 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:58.582 01:32:20 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:31:58.582 01:32:20 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:31:58.582 01:32:20 keyring_linux -- keyring/linux.sh@23 -- # return 00:31:58.582 01:32:20 keyring_linux 
-- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:31:58.582 01:32:20 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:31:58.582 01:32:20 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:31:58.582 01:32:20 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:31:58.582 01:32:20 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:58.582 01:32:20 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:31:58.582 01:32:20 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:58.582 01:32:20 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:31:58.583 01:32:20 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:31:58.583 [2024-07-25 01:32:21.025025] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fa1fd0 (107): Transport endpoint is not connected 00:31:58.583 [2024-07-25 01:32:21.025054] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:31:58.583 [2024-07-25 01:32:21.026019] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x1fa1fd0 (9): Bad file descriptor 00:31:58.583 [2024-07-25 01:32:21.027017] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:58.583 [2024-07-25 01:32:21.027028] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:31:58.583 [2024-07-25 01:32:21.027034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:58.583 request: 00:31:58.583 { 00:31:58.583 "name": "nvme0", 00:31:58.583 "trtype": "tcp", 00:31:58.583 "traddr": "127.0.0.1", 00:31:58.583 "adrfam": "ipv4", 00:31:58.583 "trsvcid": "4420", 00:31:58.583 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:58.583 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:58.583 "prchk_reftag": false, 00:31:58.583 "prchk_guard": false, 00:31:58.583 "hdgst": false, 00:31:58.583 "ddgst": false, 00:31:58.583 "psk": ":spdk-test:key1", 00:31:58.583 "method": "bdev_nvme_attach_controller", 00:31:58.583 "req_id": 1 00:31:58.583 } 00:31:58.583 Got JSON-RPC error response 00:31:58.583 response: 00:31:58.583 { 00:31:58.583 "code": -5, 00:31:58.583 "message": "Input/output error" 00:31:58.583 } 00:31:58.583 01:32:21 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:31:58.583 01:32:21 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:58.583 01:32:21 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:58.583 01:32:21 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:58.583 01:32:21 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:31:58.583 01:32:21 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:31:58.583 01:32:21 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:31:58.583 01:32:21 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:31:58.583 01:32:21 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:31:58.583 01:32:21 keyring_linux -- keyring/linux.sh@16 -- # keyctl 
search @s user :spdk-test:key0 00:31:58.583 01:32:21 keyring_linux -- keyring/linux.sh@33 -- # sn=936961227 00:31:58.583 01:32:21 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 936961227 00:31:58.583 1 links removed 00:31:58.583 01:32:21 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:31:58.583 01:32:21 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:31:58.583 01:32:21 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:31:58.583 01:32:21 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:31:58.583 01:32:21 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:31:58.583 01:32:21 keyring_linux -- keyring/linux.sh@33 -- # sn=595393286 00:31:58.583 01:32:21 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 595393286 00:31:58.583 1 links removed 00:31:58.583 01:32:21 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1100832 00:31:58.583 01:32:21 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 1100832 ']' 00:31:58.583 01:32:21 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 1100832 00:31:58.583 01:32:21 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:31:58.583 01:32:21 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:58.583 01:32:21 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1100832 00:31:58.842 01:32:21 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:58.842 01:32:21 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:58.842 01:32:21 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1100832' 00:31:58.842 killing process with pid 1100832 00:31:58.842 01:32:21 keyring_linux -- common/autotest_common.sh@967 -- # kill 1100832 00:31:58.842 Received shutdown signal, test time was about 1.000000 seconds 00:31:58.842 00:31:58.842 Latency(us) 00:31:58.842 Device Information : 
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:58.842 =================================================================================================================== 00:31:58.842 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:58.842 01:32:21 keyring_linux -- common/autotest_common.sh@972 -- # wait 1100832 00:31:58.842 01:32:21 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1100813 00:31:58.842 01:32:21 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 1100813 ']' 00:31:58.842 01:32:21 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 1100813 00:31:58.842 01:32:21 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:31:58.842 01:32:21 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:58.842 01:32:21 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1100813 00:31:58.842 01:32:21 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:58.842 01:32:21 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:58.842 01:32:21 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1100813' 00:31:58.842 killing process with pid 1100813 00:31:58.842 01:32:21 keyring_linux -- common/autotest_common.sh@967 -- # kill 1100813 00:31:58.842 01:32:21 keyring_linux -- common/autotest_common.sh@972 -- # wait 1100813 00:31:59.410 00:31:59.410 real 0m5.282s 00:31:59.410 user 0m9.364s 00:31:59.410 sys 0m1.131s 00:31:59.410 01:32:21 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:59.411 01:32:21 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:31:59.411 ************************************ 00:31:59.411 END TEST keyring_linux 00:31:59.411 ************************************ 00:31:59.411 01:32:21 -- common/autotest_common.sh@1142 -- # return 0 00:31:59.411 01:32:21 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:31:59.411 01:32:21 -- spdk/autotest.sh@312 -- # '[' 0 -eq 
1 ']' 00:31:59.411 01:32:21 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:31:59.411 01:32:21 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:31:59.411 01:32:21 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:31:59.411 01:32:21 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:31:59.411 01:32:21 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:31:59.411 01:32:21 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:31:59.411 01:32:21 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:31:59.411 01:32:21 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:31:59.411 01:32:21 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:31:59.411 01:32:21 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:31:59.411 01:32:21 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:31:59.411 01:32:21 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:31:59.411 01:32:21 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:31:59.411 01:32:21 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:31:59.411 01:32:21 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:31:59.411 01:32:21 -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:59.411 01:32:21 -- common/autotest_common.sh@10 -- # set +x 00:31:59.411 01:32:21 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:31:59.411 01:32:21 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:31:59.411 01:32:21 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:31:59.411 01:32:21 -- common/autotest_common.sh@10 -- # set +x 00:32:03.598 INFO: APP EXITING 00:32:03.598 INFO: killing all VMs 00:32:03.598 INFO: killing vhost app 00:32:03.598 INFO: EXIT DONE 00:32:05.503 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:32:05.503 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:32:05.503 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:32:05.503 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:32:05.503 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:32:05.503 0000:00:04.3 (8086 2021): Already using 
the ioatdma driver 00:32:05.503 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:32:05.503 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:32:05.503 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:32:05.763 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:32:05.763 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:32:05.763 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:32:05.763 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:32:05.763 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:32:05.763 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:32:05.763 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:32:05.763 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:32:08.354 Cleaning 00:32:08.354 Removing: /var/run/dpdk/spdk0/config 00:32:08.354 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:32:08.354 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:32:08.354 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:32:08.354 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:32:08.354 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:32:08.354 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:32:08.354 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:32:08.354 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:32:08.354 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:32:08.354 Removing: /var/run/dpdk/spdk0/hugepage_info 00:32:08.354 Removing: /var/run/dpdk/spdk1/config 00:32:08.354 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:32:08.354 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:32:08.354 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:32:08.354 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:32:08.354 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:32:08.354 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:32:08.354 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:32:08.354 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:32:08.354 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:32:08.354 Removing: /var/run/dpdk/spdk1/hugepage_info 00:32:08.354 Removing: /var/run/dpdk/spdk1/mp_socket 00:32:08.354 Removing: /var/run/dpdk/spdk2/config 00:32:08.354 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:32:08.354 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:32:08.354 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:32:08.354 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:32:08.354 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:32:08.354 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:32:08.354 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:32:08.354 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:32:08.354 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:32:08.354 Removing: /var/run/dpdk/spdk2/hugepage_info 00:32:08.354 Removing: /var/run/dpdk/spdk3/config 00:32:08.354 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:32:08.354 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:32:08.354 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:32:08.354 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:32:08.354 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:32:08.354 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:32:08.354 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:32:08.354 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:32:08.354 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:32:08.354 Removing: /var/run/dpdk/spdk3/hugepage_info 00:32:08.354 Removing: /var/run/dpdk/spdk4/config 00:32:08.355 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:32:08.355 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:32:08.355 Removing: 
/var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:32:08.355 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:32:08.355 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:32:08.355 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:32:08.355 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:32:08.355 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:32:08.355 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:32:08.355 Removing: /var/run/dpdk/spdk4/hugepage_info 00:32:08.355 Removing: /dev/shm/bdev_svc_trace.1 00:32:08.355 Removing: /dev/shm/nvmf_trace.0 00:32:08.355 Removing: /dev/shm/spdk_tgt_trace.pid716814 00:32:08.355 Removing: /var/run/dpdk/spdk0 00:32:08.355 Removing: /var/run/dpdk/spdk1 00:32:08.355 Removing: /var/run/dpdk/spdk2 00:32:08.355 Removing: /var/run/dpdk/spdk3 00:32:08.355 Removing: /var/run/dpdk/spdk4 00:32:08.355 Removing: /var/run/dpdk/spdk_pid1003518 00:32:08.355 Removing: /var/run/dpdk/spdk_pid1009093 00:32:08.355 Removing: /var/run/dpdk/spdk_pid1017618 00:32:08.355 Removing: /var/run/dpdk/spdk_pid1024838 00:32:08.355 Removing: /var/run/dpdk/spdk_pid1024840 00:32:08.355 Removing: /var/run/dpdk/spdk_pid1043037 00:32:08.355 Removing: /var/run/dpdk/spdk_pid1043731 00:32:08.355 Removing: /var/run/dpdk/spdk_pid1044230 00:32:08.355 Removing: /var/run/dpdk/spdk_pid1044911 00:32:08.355 Removing: /var/run/dpdk/spdk_pid1045878 00:32:08.355 Removing: /var/run/dpdk/spdk_pid1046566 00:32:08.355 Removing: /var/run/dpdk/spdk_pid1047102 00:32:08.355 Removing: /var/run/dpdk/spdk_pid1047756 00:32:08.355 Removing: /var/run/dpdk/spdk_pid1052012 00:32:08.355 Removing: /var/run/dpdk/spdk_pid1052247 00:32:08.355 Removing: /var/run/dpdk/spdk_pid1058301 00:32:08.355 Removing: /var/run/dpdk/spdk_pid1058578 00:32:08.355 Removing: /var/run/dpdk/spdk_pid1060801 00:32:08.355 Removing: /var/run/dpdk/spdk_pid1068419 00:32:08.355 Removing: /var/run/dpdk/spdk_pid1068474 00:32:08.355 Removing: /var/run/dpdk/spdk_pid1073891 00:32:08.355 
Removing: /var/run/dpdk/spdk_pid1075835 00:32:08.355 Removing: /var/run/dpdk/spdk_pid1077798 00:32:08.355 Removing: /var/run/dpdk/spdk_pid1078855 00:32:08.355 Removing: /var/run/dpdk/spdk_pid1080812 00:32:08.355 Removing: /var/run/dpdk/spdk_pid1081942 00:32:08.355 Removing: /var/run/dpdk/spdk_pid1090607 00:32:08.355 Removing: /var/run/dpdk/spdk_pid1091071 00:32:08.355 Removing: /var/run/dpdk/spdk_pid1091735 00:32:08.355 Removing: /var/run/dpdk/spdk_pid1093886 00:32:08.355 Removing: /var/run/dpdk/spdk_pid1094440 00:32:08.355 Removing: /var/run/dpdk/spdk_pid1094934 00:32:08.355 Removing: /var/run/dpdk/spdk_pid1098558 00:32:08.355 Removing: /var/run/dpdk/spdk_pid1098758 00:32:08.355 Removing: /var/run/dpdk/spdk_pid1100275 00:32:08.355 Removing: /var/run/dpdk/spdk_pid1100813 00:32:08.355 Removing: /var/run/dpdk/spdk_pid1100832 00:32:08.355 Removing: /var/run/dpdk/spdk_pid714545 00:32:08.355 Removing: /var/run/dpdk/spdk_pid715747 00:32:08.355 Removing: /var/run/dpdk/spdk_pid716814 00:32:08.355 Removing: /var/run/dpdk/spdk_pid717443 00:32:08.355 Removing: /var/run/dpdk/spdk_pid718397 00:32:08.355 Removing: /var/run/dpdk/spdk_pid718636 00:32:08.355 Removing: /var/run/dpdk/spdk_pid719613 00:32:08.355 Removing: /var/run/dpdk/spdk_pid719830 00:32:08.355 Removing: /var/run/dpdk/spdk_pid719962 00:32:08.355 Removing: /var/run/dpdk/spdk_pid721562 00:32:08.355 Removing: /var/run/dpdk/spdk_pid722734 00:32:08.355 Removing: /var/run/dpdk/spdk_pid723014 00:32:08.355 Removing: /var/run/dpdk/spdk_pid723306 00:32:08.355 Removing: /var/run/dpdk/spdk_pid723610 00:32:08.355 Removing: /var/run/dpdk/spdk_pid723964 00:32:08.355 Removing: /var/run/dpdk/spdk_pid724174 00:32:08.355 Removing: /var/run/dpdk/spdk_pid724398 00:32:08.355 Removing: /var/run/dpdk/spdk_pid724679 00:32:08.355 Removing: /var/run/dpdk/spdk_pid725642 00:32:08.355 Removing: /var/run/dpdk/spdk_pid728624 00:32:08.355 Removing: /var/run/dpdk/spdk_pid728897 00:32:08.355 Removing: /var/run/dpdk/spdk_pid729160 00:32:08.355 
Removing: /var/run/dpdk/spdk_pid729390 00:32:08.355 Removing: /var/run/dpdk/spdk_pid729878 00:32:08.355 Removing: /var/run/dpdk/spdk_pid729894 00:32:08.355 Removing: /var/run/dpdk/spdk_pid730380 00:32:08.355 Removing: /var/run/dpdk/spdk_pid730592 00:32:08.355 Removing: /var/run/dpdk/spdk_pid730848 00:32:08.613 Removing: /var/run/dpdk/spdk_pid730889 00:32:08.613 Removing: /var/run/dpdk/spdk_pid731148 00:32:08.613 Removing: /var/run/dpdk/spdk_pid731376 00:32:08.613 Removing: /var/run/dpdk/spdk_pid731716 00:32:08.613 Removing: /var/run/dpdk/spdk_pid731963 00:32:08.613 Removing: /var/run/dpdk/spdk_pid732256 00:32:08.613 Removing: /var/run/dpdk/spdk_pid732523 00:32:08.613 Removing: /var/run/dpdk/spdk_pid732671 00:32:08.613 Removing: /var/run/dpdk/spdk_pid732827 00:32:08.613 Removing: /var/run/dpdk/spdk_pid733076 00:32:08.613 Removing: /var/run/dpdk/spdk_pid733331 00:32:08.613 Removing: /var/run/dpdk/spdk_pid733578 00:32:08.613 Removing: /var/run/dpdk/spdk_pid733833 00:32:08.613 Removing: /var/run/dpdk/spdk_pid734080 00:32:08.613 Removing: /var/run/dpdk/spdk_pid734327 00:32:08.613 Removing: /var/run/dpdk/spdk_pid734582 00:32:08.613 Removing: /var/run/dpdk/spdk_pid734828 00:32:08.613 Removing: /var/run/dpdk/spdk_pid735075 00:32:08.613 Removing: /var/run/dpdk/spdk_pid735326 00:32:08.613 Removing: /var/run/dpdk/spdk_pid735578 00:32:08.613 Removing: /var/run/dpdk/spdk_pid735825 00:32:08.613 Removing: /var/run/dpdk/spdk_pid736077 00:32:08.613 Removing: /var/run/dpdk/spdk_pid736324 00:32:08.613 Removing: /var/run/dpdk/spdk_pid736578 00:32:08.613 Removing: /var/run/dpdk/spdk_pid736825 00:32:08.613 Removing: /var/run/dpdk/spdk_pid737073 00:32:08.613 Removing: /var/run/dpdk/spdk_pid737332 00:32:08.613 Removing: /var/run/dpdk/spdk_pid737579 00:32:08.613 Removing: /var/run/dpdk/spdk_pid737831 00:32:08.613 Removing: /var/run/dpdk/spdk_pid738019 00:32:08.613 Removing: /var/run/dpdk/spdk_pid738420 00:32:08.613 Removing: /var/run/dpdk/spdk_pid742072 00:32:08.613 Removing: 
/var/run/dpdk/spdk_pid785494 00:32:08.613 Removing: /var/run/dpdk/spdk_pid789518 00:32:08.613 Removing: /var/run/dpdk/spdk_pid799628 00:32:08.613 Removing: /var/run/dpdk/spdk_pid805198 00:32:08.613 Removing: /var/run/dpdk/spdk_pid809640 00:32:08.613 Removing: /var/run/dpdk/spdk_pid810219 00:32:08.613 Removing: /var/run/dpdk/spdk_pid816171 00:32:08.613 Removing: /var/run/dpdk/spdk_pid822226 00:32:08.613 Removing: /var/run/dpdk/spdk_pid822306 00:32:08.613 Removing: /var/run/dpdk/spdk_pid823056 00:32:08.613 Removing: /var/run/dpdk/spdk_pid823967 00:32:08.613 Removing: /var/run/dpdk/spdk_pid824890 00:32:08.613 Removing: /var/run/dpdk/spdk_pid825357 00:32:08.613 Removing: /var/run/dpdk/spdk_pid825451 00:32:08.613 Removing: /var/run/dpdk/spdk_pid825735 00:32:08.613 Removing: /var/run/dpdk/spdk_pid825819 00:32:08.613 Removing: /var/run/dpdk/spdk_pid825821 00:32:08.613 Removing: /var/run/dpdk/spdk_pid826737 00:32:08.613 Removing: /var/run/dpdk/spdk_pid827654 00:32:08.613 Removing: /var/run/dpdk/spdk_pid828471 00:32:08.613 Removing: /var/run/dpdk/spdk_pid829036 00:32:08.613 Removing: /var/run/dpdk/spdk_pid829042 00:32:08.613 Removing: /var/run/dpdk/spdk_pid829281 00:32:08.613 Removing: /var/run/dpdk/spdk_pid830519 00:32:08.613 Removing: /var/run/dpdk/spdk_pid831623 00:32:08.613 Removing: /var/run/dpdk/spdk_pid839822 00:32:08.613 Removing: /var/run/dpdk/spdk_pid840286 00:32:08.613 Removing: /var/run/dpdk/spdk_pid844971 00:32:08.613 Removing: /var/run/dpdk/spdk_pid850702 00:32:08.613 Removing: /var/run/dpdk/spdk_pid853289 00:32:08.613 Removing: /var/run/dpdk/spdk_pid863698 00:32:08.613 Removing: /var/run/dpdk/spdk_pid872595 00:32:08.613 Removing: /var/run/dpdk/spdk_pid874415 00:32:08.613 Removing: /var/run/dpdk/spdk_pid875339 00:32:08.613 Removing: /var/run/dpdk/spdk_pid891836 00:32:08.871 Removing: /var/run/dpdk/spdk_pid896014 00:32:08.871 Removing: /var/run/dpdk/spdk_pid920591 00:32:08.871 Removing: /var/run/dpdk/spdk_pid925066 00:32:08.871 Removing: 
/var/run/dpdk/spdk_pid926712 00:32:08.871 Removing: /var/run/dpdk/spdk_pid928550 00:32:08.871 Removing: /var/run/dpdk/spdk_pid928839 00:32:08.871 Removing: /var/run/dpdk/spdk_pid929159 00:32:08.871 Removing: /var/run/dpdk/spdk_pid929404 00:32:08.871 Removing: /var/run/dpdk/spdk_pid929918 00:32:08.871 Removing: /var/run/dpdk/spdk_pid932138 00:32:08.871 Removing: /var/run/dpdk/spdk_pid933132 00:32:08.871 Removing: /var/run/dpdk/spdk_pid933583 00:32:08.871 Removing: /var/run/dpdk/spdk_pid935727 00:32:08.871 Removing: /var/run/dpdk/spdk_pid936444 00:32:08.871 Removing: /var/run/dpdk/spdk_pid937171 00:32:08.871 Removing: /var/run/dpdk/spdk_pid941210 00:32:08.871 Removing: /var/run/dpdk/spdk_pid951174 00:32:08.871 Removing: /var/run/dpdk/spdk_pid955007 00:32:08.871 Removing: /var/run/dpdk/spdk_pid961189 00:32:08.871 Removing: /var/run/dpdk/spdk_pid962492 00:32:08.871 Removing: /var/run/dpdk/spdk_pid964037 00:32:08.871 Removing: /var/run/dpdk/spdk_pid968334 00:32:08.871 Removing: /var/run/dpdk/spdk_pid972572 00:32:08.871 Removing: /var/run/dpdk/spdk_pid980325 00:32:08.871 Removing: /var/run/dpdk/spdk_pid980443 00:32:08.871 Removing: /var/run/dpdk/spdk_pid984929 00:32:08.871 Removing: /var/run/dpdk/spdk_pid985157 00:32:08.871 Removing: /var/run/dpdk/spdk_pid985393 00:32:08.871 Removing: /var/run/dpdk/spdk_pid985830 00:32:08.871 Removing: /var/run/dpdk/spdk_pid985858 00:32:08.871 Removing: /var/run/dpdk/spdk_pid990333 00:32:08.871 Removing: /var/run/dpdk/spdk_pid990897 00:32:08.871 Removing: /var/run/dpdk/spdk_pid995233 00:32:08.871 Removing: /var/run/dpdk/spdk_pid997988 00:32:08.871 Clean 00:32:08.871 01:32:31 -- common/autotest_common.sh@1451 -- # return 0 00:32:08.871 01:32:31 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:32:08.871 01:32:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:08.871 01:32:31 -- common/autotest_common.sh@10 -- # set +x 00:32:08.871 01:32:31 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:32:08.871 01:32:31 -- 
common/autotest_common.sh@728 -- # xtrace_disable 00:32:08.871 01:32:31 -- common/autotest_common.sh@10 -- # set +x 00:32:09.129 01:32:31 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:32:09.129 01:32:31 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:32:09.129 01:32:31 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:32:09.129 01:32:31 -- spdk/autotest.sh@391 -- # hash lcov 00:32:09.129 01:32:31 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:32:09.129 01:32:31 -- spdk/autotest.sh@393 -- # hostname 00:32:09.129 01:32:31 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:32:09.129 geninfo: WARNING: invalid characters removed from testname! 
00:32:31.060 01:32:51 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:32:31.625 01:32:53 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:32:33.528 01:32:55 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:32:35.431 01:32:57 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:32:36.809 01:32:59 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:32:38.713 01:33:01 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:32:40.617 01:33:02 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:32:40.617 01:33:03 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:40.617 01:33:03 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:32:40.617 01:33:03 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:40.617 01:33:03 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:40.617 01:33:03 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:40.617 01:33:03 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:40.617 01:33:03 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:40.617 01:33:03 -- paths/export.sh@5 -- $ export PATH 00:32:40.617 01:33:03 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:40.617 01:33:03 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:32:40.617 01:33:03 -- common/autobuild_common.sh@444 -- $ date +%s 00:32:40.617 01:33:03 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721863983.XXXXXX 00:32:40.617 01:33:03 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721863983.TH6Mgw 00:32:40.617 01:33:03 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:32:40.617 01:33:03 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:32:40.617 01:33:03 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:32:40.617 01:33:03 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:32:40.617 01:33:03 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:32:40.617 01:33:03 -- common/autobuild_common.sh@460 -- $ get_config_params 00:32:40.617 01:33:03 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:32:40.617 01:33:03 -- common/autotest_common.sh@10 -- $ set +x 00:32:40.618 01:33:03 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:32:40.618 01:33:03 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:32:40.618 01:33:03 -- pm/common@17 -- $ local monitor 00:32:40.618 01:33:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:40.618 01:33:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:40.618 01:33:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:40.618 01:33:03 -- pm/common@21 -- $ date +%s 00:32:40.618 01:33:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:40.618 01:33:03 -- pm/common@21 -- $ date +%s 00:32:40.618 01:33:03 -- pm/common@21 -- $ date +%s 00:32:40.618 01:33:03 -- pm/common@25 -- $ sleep 1 00:32:40.618 01:33:03 -- pm/common@21 -- $ date +%s 00:32:40.618 01:33:03 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721863983 00:32:40.618 01:33:03 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721863983 00:32:40.618 01:33:03 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p 
monitor.autopackage.sh.1721863983 00:32:40.618 01:33:03 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721863983 00:32:40.618 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721863983_collect-vmstat.pm.log 00:32:40.618 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721863983_collect-cpu-load.pm.log 00:32:40.618 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721863983_collect-cpu-temp.pm.log 00:32:40.618 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721863983_collect-bmc-pm.bmc.pm.log 00:32:41.594 01:33:04 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:32:41.594 01:33:04 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j96 00:32:41.594 01:33:04 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:32:41.594 01:33:04 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:32:41.594 01:33:04 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:32:41.594 01:33:04 -- spdk/autopackage.sh@19 -- $ timing_finish 00:32:41.594 01:33:04 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:32:41.594 01:33:04 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:32:41.594 01:33:04 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:32:41.851 01:33:04 -- spdk/autopackage.sh@20 -- $ exit 0 00:32:41.851 01:33:04 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:32:41.851 01:33:04 -- pm/common@29 -- $ signal_monitor_resources 
TERM 00:32:41.851 01:33:04 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:32:41.852 01:33:04 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:41.852 01:33:04 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:32:41.852 01:33:04 -- pm/common@44 -- $ pid=1110902 00:32:41.852 01:33:04 -- pm/common@50 -- $ kill -TERM 1110902 00:32:41.852 01:33:04 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:41.852 01:33:04 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:32:41.852 01:33:04 -- pm/common@44 -- $ pid=1110903 00:32:41.852 01:33:04 -- pm/common@50 -- $ kill -TERM 1110903 00:32:41.852 01:33:04 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:41.852 01:33:04 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:32:41.852 01:33:04 -- pm/common@44 -- $ pid=1110905 00:32:41.852 01:33:04 -- pm/common@50 -- $ kill -TERM 1110905 00:32:41.852 01:33:04 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:41.852 01:33:04 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:32:41.852 01:33:04 -- pm/common@44 -- $ pid=1110928 00:32:41.852 01:33:04 -- pm/common@50 -- $ sudo -E kill -TERM 1110928 00:32:41.852 + [[ -n 611366 ]] 00:32:41.852 + sudo kill 611366 00:32:41.860 [Pipeline] } 00:32:41.878 [Pipeline] // stage 00:32:41.883 [Pipeline] } 00:32:41.899 [Pipeline] // timeout 00:32:41.904 [Pipeline] } 00:32:41.921 [Pipeline] // catchError 00:32:41.927 [Pipeline] } 00:32:41.944 [Pipeline] // wrap 00:32:41.951 [Pipeline] } 00:32:41.966 [Pipeline] // catchError 00:32:41.975 [Pipeline] stage 00:32:41.978 [Pipeline] { (Epilogue) 00:32:41.992 [Pipeline] catchError 00:32:41.993 [Pipeline] { 00:32:42.007 [Pipeline] echo 00:32:42.009 Cleanup 
processes 00:32:42.015 [Pipeline] sh 00:32:42.300 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:32:42.300 1111031 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:32:42.300 1111304 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:32:42.313 [Pipeline] sh 00:32:42.593 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:32:42.593 ++ grep -v 'sudo pgrep' 00:32:42.593 ++ awk '{print $1}' 00:32:42.593 + sudo kill -9 1111031 00:32:42.604 [Pipeline] sh 00:32:42.884 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:32:52.868 [Pipeline] sh 00:32:53.151 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:32:53.151 Artifacts sizes are good 00:32:53.165 [Pipeline] archiveArtifacts 00:32:53.172 Archiving artifacts 00:32:53.310 [Pipeline] sh 00:32:53.595 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:32:53.609 [Pipeline] cleanWs 00:32:53.619 [WS-CLEANUP] Deleting project workspace... 00:32:53.619 [WS-CLEANUP] Deferred wipeout is used... 00:32:53.626 [WS-CLEANUP] done 00:32:53.628 [Pipeline] } 00:32:53.642 [Pipeline] // catchError 00:32:53.653 [Pipeline] sh 00:32:53.939 + logger -p user.info -t JENKINS-CI 00:32:53.949 [Pipeline] } 00:32:53.965 [Pipeline] // stage 00:32:53.970 [Pipeline] } 00:32:53.987 [Pipeline] // node 00:32:53.993 [Pipeline] End of Pipeline 00:32:54.034 Finished: SUCCESS